So the new Firebase has support for storage using Google Cloud Platform.
You can upload a file to the images folder using:
var uploadTask = storageRef.child('images').put(file, metadata);
What if you want to create a subfolder images/user1234 dynamically using code?
The official sample does not show how to do that, and neither do the official guide nor the reference docs.
Is the Firebase Console the only place where folders can be created manually?
The Firebase Storage API creates intermediate "folders" on the fly: if you create a file at images/user1234/file.txt, the intermediate "folders" images and user1234 will be created along the way. So your code becomes:
var uploadTask = storageRef.child('images/user1234/file.txt').put(file, metadata);
Note that you need to include the file name (file.txt in this example) in the child() call, since the reference must contain the full path including the file name; otherwise your file will simply be called images.
The Firebase Console does allow you to create a folder, since that's the easiest way to add files to a specific folder there.
But there is no public API to create a folder. Instead folders are auto-created as you add files to them.
You most certainly can create directories... with a little bit of playing with the references, I did the following.
test = (e, v) => {
  let fileName = "filename"
  let newDirectory = "someDir"
  // Build a reference that includes the new "directory" in the path
  let storage = firebase.storage().ref(`images/${newDirectory}/${fileName}`)
  let file = e.target.files[0]
  if (file !== undefined && file.type === "image/png") {
    storage.put(file)
      .then(d => console.log('you did it'))
      .catch(d => console.log("do something"))
  }
}
String myFolder = "MyImages";
StorageReference riversRef = storageReference.child(myFolder).child("images/pic.jpg");
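Note that chaining child() calls simply concatenates the paths, so the reference above points to MyImages/images/pic.jpg.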
Firebase is lacking some very important functionality; you always end up doing tricks to emulate behaviour that should be standard.
If you create a folder manually from the Firebase console, it will persist even when there are no more files in it.
If you create a folder dynamically and all files get deleted at some point, the folder will disappear and be deleted as well.
I implemented a file manager using Firebase Storage, so when a user wants to upload a file he can do it through this interface rather than from something external to the app such as the Firebase Console. You want to give the user the option to reorganize the files as he wants, but something as common as creating a new folder cannot be done without tricks. Why? Just because this is Firebase.
So in order to emulate this behaviour what I came up with was the following:
Create a reference with the new folder name.
Create a reference for a "ghost" file as a child of the folder's reference and always give it the same fixed name, e.g. '.ghostfile'.
Upload the file to this newly created folder. Any method is valid; I just use uploadString.
Every time I list the files of a reference, exclude any file with that name, so this "ghost" file is never shown in the file manager.
So here is an example to create a folder:
import { ref, uploadString, listAll, StorageReference } from 'firebase/storage'

async function createFolder (currentRef: StorageReference, folderName: string) {
  const newDir = ref(currentRef, folderName)
  const ghostFile = ref(newDir, '.ghostfile')
  await uploadString(ghostFile, '')
}
And an example to list the files:
async function loadLists (dirRef: StorageReference) {
  const { prefixes, items } = await listAll(dirRef)
  return {
    directories: prefixes,
    files: items.filter(file => file.name !== '.ghostfile')
  }
}
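A minimal usage sketch, assuming the Firebase v9 modular SDK and an images root (both names are illustrative), run inside an async function:
import { getStorage, ref } from 'firebase/storage'

const rootRef = ref(getStorage(), 'images')
await createFolder(rootRef, 'user1234')                 // creates images/user1234/.ghostfile
const { directories, files } = await loadLists(rootRef) // 'user1234' now shows up in directories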
The Firebase console allows you to create a folder manually; I don't think there is another way to create one.
I am writing an Automator workflow to work with files and folders. I’m writing it in JavaScript as I’m more familiar with it.
I would like to receive a folder, and get the folder’s name as well as the files inside.
Here is roughly what I have tried:
1. Window receives current folders in Finder (I’m only interested in the first and only folder)
2. Get Folder Contents
3. JavaScript:
function run(input, parameters) {
  var files = [];
  // Trim everything up to the last slash to keep just each file's base name
  for (let file of input) files.push(file.toString().replace(/.*\//, ''));
  // etc
}
This works, but I don’t have the folder name. Using this, I get the full path name of each file, which is why I run it through the replace() method.
If I omit step 2 above, I get the folder, but I don’t know how to access the contents of the folder.
I can fake the folder by getting the first file and stripping off the file name, but I wonder whether there is a more direct approach to getting both the folder and its contents.
I’ve got it working. In case anybody has a similar question:
// Window receives current folders in Finder
var app = Application.currentApplication()
app.includeStandardAdditions = true

function run(input, parameters) {
  let directory = input.toString();
  // List the folder's contents via the standard additions' listFolder()
  var directoryItems = app.listFolder(directory, { invisibles: false })
  var files = [];
  for (let file of directoryItems) files.push(file.toString().replace(/.*\//, ''));
  // etc
}
I don’t include the Get Folder Contents step, but iterate through the folder using app.listFolder() instead. The replace() method is to trim off everything up to the last slash, giving the file’s base name.
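If you also need the folder's own name (as the question asked), the same trimming trick should work on the folder path itself; a small, hypothetical addition, assuming the path has no trailing slash:
// Hypothetical: derive the folder's base name the same way as the file names
let folderName = directory.replace(/.*\//, '');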
In a React project I am trying to list all the files in blob storage using a SAS token created for the given directory. My understanding is that I need to create a DataLakeFileSystemClient, but I only have a URL for the directory and a DataLakeDirectoryClient, and I somehow need to create the DataLakeFileSystemClient from those.
The url passed is something along the lines of: https://myaccount.dfs.core.windows.net/mycontainer/mydirectory{sastoken}
I have found a way to do this, although I don't know if it's the best way.
To get from the directory client to a FileSystem client I wrote a helper method:
const getFileSystemClient = (directoryClient: DataLakeDirectoryClient) => {
  const url = new URL(directoryClient.url);
  url.pathname = directoryClient.fileSystemName;
  return new DataLakeFileSystemClient(url.toString());
}
To list directories I use the following code:
const fsClient = getFileSystemClient(directoryClient);
for await (const path of fsClient.listPaths({ path: directoryClient.name })) {
  console.log(path.name);
}
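This works because setting URL.pathname only replaces the path portion of the URL: the SAS token in the query string is preserved, so the resulting DataLakeFileSystemClient is authorized with the same token, and listPaths({ path: directoryClient.name }) scopes the listing back down to the original directory.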
I am launching a Cloud Function in order to replicate a record I have in Firestore. One of the fields is an image; the function first tries to copy the image and then duplicates the record.
This is the code:
export async function copyContentFunction(data: any, context: any): Promise<string> {
  if (!context.auth || !context.auth.token.isAdmin) {
    throw new functions.https.HttpsError('unauthenticated', 'Auth error.');
  }

  const id = data.id;
  const originalImage = data.originalImage;
  const copy = data.copy;
  if (id === null || originalImage === null || copy === null) {
    throw new functions.https.HttpsError('invalid-argument', 'Missing mandatory parameters.');
  }

  console.log(`id: ${id}, original image: ${originalImage}`);
  try {
    // Copy the image
    await admin.storage().bucket('content').file(originalImage).copy(
      admin.storage().bucket('content').file(id)
    );
    // Create new content
    const ref = admin.firestore().collection('content').doc(id);
    await ref.set(copy);
    return 'ok';
  } catch {
    throw new functions.https.HttpsError('internal', 'Internal error.');
  }
}
I have tried multiple combinations but this code always fails. For some reason the process of copying the image is failing; am I doing anything wrong?
Thanks.
Using the copy() method in a Cloud Function should work without problem. You don't share any detail about the error you get (I recommend using catch(error) instead of just catch), but I can see two potential problems with your code:
The file corresponding to originalImage does not exist;
The content bucket does not exist in your Cloud Storage instance.
The second problem usually comes from the common mistake of mixing up the concepts of buckets and folders (or directories) in Cloud Storage.
Actually, Google Cloud Storage does not have genuine "folders". In the Cloud Storage console, the files in your bucket are presented in a hierarchical tree of folders (just like the file system on your local hard disk), but this is just a way of presenting the files: there aren't genuine folders/directories in a bucket. The Cloud Storage console just uses the different parts of the file paths, split on the "/" delimiter character, to "simulate" a folder structure.
This doc on Cloud Storage and gsutil explains and illustrates this "illusion of a hierarchical file tree" very well.
So, if you want to copy a file from your default bucket to a content "folder", do as follows:
await admin.storage().bucket().file(`content/${originalImage}`).copy(
admin.storage().bucket().file(`content/${id}`)
);
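Note that calling bucket() with no arguments returns your project's default Cloud Storage bucket, so the files above end up under a content/ prefix in that default bucket.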
With the new Firebase API you can upload files into cloud storage from client code. The examples assume the file name is known or static during upload:
// Create a root reference
var storageRef = firebase.storage().ref();
// Create a reference to 'mountains.jpg'
var mountainsRef = storageRef.child('mountains.jpg');
// Create a reference to 'images/mountains.jpg'
var mountainImagesRef = storageRef.child('images/mountains.jpg');
or
// File or Blob, assume the file is called rivers.jpg
var file = ...
// Upload the file to the path 'images/rivers.jpg'
// We can use the 'name' property on the File API to get our file name
var uploadTask = storageRef.child('images/' + file.name).put(file);
With users uploading their own files, name conflicts are going to be an issue. How can you have Firebase create a filename instead of defining it yourself? Is there something like the push() feature in the database for creating unique storage references?
Firebase Storage Product Manager here:
TL;DR: Use a UUID generator (Android (UUID) and iOS (NSUUID) have one built in; in JS you can use something like this: Create GUID / UUID in JavaScript?), then append the file extension if you want to preserve it (split the file.name on '.' and get the last segment).
We didn't know which version of unique files developers would want (see below), since there are many, many use cases for this, so we decided to leave the choice up to developers.
images/uuid/image.png // option 1: clean name, under a UUID "folder"
images/uuid.png       // option 2: unique name, same extension
images/uuid           // option 3: no extension
It seems to me like this would be a reasonable thing to explain in our documentation though, so I'll file a bug internally to document it :)
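For illustration, here is a minimal sketch of option 2 in JS, assuming the same storageRef as in the question and a browser that provides crypto.randomUUID() (the fileInput variable is hypothetical):
var file = fileInput.files[0];
// Keep the original extension: split the file.name on '.' and take the last segment
var extension = file.name.split('.').pop();
// Unique name, same extension (option 2)
var uniqueRef = storageRef.child('images/' + crypto.randomUUID() + '.' + extension);
var uploadTask = uniqueRef.put(file);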
This is the solution for people using Dart.
Generate the current date and time stamp using:
var time = DateTime.now().millisecondsSinceEpoch.toString();
Now upload the file to Firebase Storage using:
await FirebaseStorage.instance.ref('images/$time.png').putFile(yourfile);
You can even get the download URL using:
var url = await FirebaseStorage.instance.ref('images/$time.png').getDownloadURL();
First install uuid: npm i uuid
Then define the file reference like this:
import { v4 as uuidv4 } from "uuid";

const fileRef = storageRef.child(
  `${uuidv4()}-${file.name}` // prefix your file or image name with a UUID
);
After that, upload the file with the fileRef:
fileRef.put(file) // your File or Blob
In Android (Kotlin) I solved it by combining the user UID with the milliseconds since 1970:
val ref = storage.reference.child("images/${auth.currentUser!!.uid}-${System.currentTimeMillis()}")
The code below combines the file structure from @Mike McDonald's answer, the current date-time stamp from @Aman Kumar Singh's answer, and the user uid from @Damien's answer: I think it provides a unique id while making the Firebase Storage screen more readable.
Reference ref = firebaseStorage
.ref()
.child('videos')
.child(authController.user.uid)
.child(DateTime.now().millisecondsSinceEpoch.toString());
I’m using gulp and nunjucks to automate some basic email templating tasks.
I have a chain of tasks which can be triggered when an image is added to the images folder e.g.:
images compressed
new image name and dimensions logged to json file
json image data then used to populate template when template task is run
So far so good.
I want to be able to define a generic image file path for each template, which will then be concatenated with each image name (as stored in the json file). So something like:
<img src="{{data.path}}{{data.src}}" >
If I want to nominate a distinct folder to contain the images for each template generated, then Cloudinary requires a mandatory unique version component in the file path. So the image path can never be consistent throughout a template.
if your public ID includes folders (elements divided by '/'), the version component is mandatory (but you can make it shorter).
For example:
http://res.cloudinary.com/demo/image/upload/v1312461204/sample_email/hero_image.jpg
http://res.cloudinary.com/demo/image/upload/v1312461207/sample_email/footer_image.jpg
Same folder. Different path.
So it seems I would now need to create a script/task that can log and store each distinct file path (with its unique id generated by Cloudinary) for every image, any time an image is uploaded or updated, and then rerun the templating process to publish them.
This just seems like quite a convoluted process so if there’s an easier approach I’d love to know?
Else if that really is the required route it would great if someone could point me to an example of the kind of script that achieves something similar.
Presumably some hosting services will not have the mandatory unique key, which makes life easier. I have spent some time getting to know Cloudinary, and it's a free service with a lot of scope, so I guess I'm reluctant to abandon ship, but I'm open to all suggestions.
Thanks
Note that the version component (e.g., v1312461204) isn't mandatory anymore for most use cases. The URL can indeed work without it, e.g.:
http://res.cloudinary.com/demo/image/upload/sample_email/hero_image.jpg
Having said that, it is highly recommended to include the version component in the URL in cases where you'd like to replace the image with a new one while keeping the exact same public ID. In that case, if you access the exact same URL, you might get a CDN-cached version of the image, which may be the old one.
Therefore, when you upload, you can get the version value from Cloudinary's upload response and store it in your DB; the next time you update your image, also update the URL with the new version value.
Alternatively, you can also ask Cloudinary to invalidate the image while uploading. Note that while including the version component "busts" the cache immediately, invalidation may take a while to propagate through the CDN. For more information:
http://cloudinary.com/documentation/image_transformations#image_versions
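As a rough sketch of both suggestions (the public ID and file name are just examples, and storing image.url in your DB is left out):
var cloudinary = require('cloudinary').v2;

// Re-upload an image under a fixed public ID; invalidate: true additionally
// asks Cloudinary to invalidate CDN copies, as described above
cloudinary.uploader.upload('hero_image.jpg', {
  public_id: 'sample_email/hero_image',
  invalidate: true
}, function (err, image) {
  if (err) return console.error(err);
  // The upload response carries the new version; image.url already embeds it
  console.log(image.version); // e.g. 1312461204
  console.log(image.url);     // .../image/upload/v1312461204/sample_email/hero_image.jpg
});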
This is the solution I came up with. It adapts the generic script I use to upload images from a folder to Cloudinary: it now stores the updated file paths returned by Cloudinary and generates a json data file that publishes the hosted src details to a template.
I'm sure it could be a lot better semantically, so I welcome any revisions if someone stumbles on this, but it seems to do the job:
// points to the config file where we are defining file paths
var path = require('./gulp.path')();
// IMAGE HOSTING
var fs = require('fs'); // Node core module, used below to write the json output
var cloudinary = require('cloudinary').v2;
var uploads = {};
var dotenv = require('dotenv');
dotenv.load();
// Finds the images in a specific folder and returns an array
var read = require('fs-readdir-recursive');
// Read the contents of the images folder
var imagesInFolder = read(path.images);
// The array that will be populated with image src data
var imgData = new Array();
(function uploadImages(){
  // Loop through all images in the folder and upload each one
  for (var i = 0; i < imagesInFolder.length; i++) {
    cloudinary.uploader.upload(path.images + imagesInFolder[i], {folder: path.hosted_folder, use_filename: true, unique_filename: false, tags: 'basic_sample'}, function(err, image){
      console.log();
      console.log("** Public Id");
      if (err) { console.warn(err); }
      console.log("* Same image, uploaded with a custom public_id");
      console.log("* " + image.public_id);
      // Generate the category title for each image. The category is defined within the
      // image name: it's the first part of the image name, i.e. anything prior to a hyphen
      var title = image.public_id.substr(image.public_id.lastIndexOf('/') + 1).replace(/\.[^/.]+$/, "").replace(/-.*$/, "");
      console.log("* " + title);
      console.log("* " + image.url);
      // Add the updated src for each image to the output array
      imgData.push({
        [title]: {"src": image.url}
      });
      // Stringify data with no spacing so the .replace regex can easily remove the unwanted braces
      var imgDataJson = JSON.stringify(imgData, null, null);
      // Remove the unwanted [] that wraps the json imgData array
      imgDataJson = imgDataJson.substring(1, imgDataJson.length - 1);
      // Replace the unwanted "},{" braces with "," otherwise the output is not valid json
      imgDataJson = imgDataJson.replace(/(},{)/g, ',');
      var outputFilename = "images2-hosted.json";
      // Output the hosted image path data to a json file
      // (A separate gulp task is then run to merge and update the new 'src' data into an existing image data json file)
      fs.writeFile(path.image_data_src + outputFilename, imgDataJson, function(err) {
        if (err) {
          console.log(err);
        } else {
          console.log("JSON saved to " + outputFilename);
        }
      });
    });
  }
})();
A gulp task is then used to merge the newly generated json and override the existing json data file:
// COMPILE live image hosting data
var gulp = require('gulp');
var merge = require('gulp-merge-json');
var notify = require('gulp-notify');

gulp.task('imageData:comp', function() {
  return gulp
    .src('src/data/images/*.json')
    .pipe(merge('src/data/images.json'))
    .pipe(gulp.dest('./'))
    .pipe(notify({ message: 'imageData:comp task complete' }));
});