I know how to add data to the Firebase database, and I know how to upload an image to Firebase Storage. I am doing this in JavaScript.
I am not able to figure out how to link the image to my database object.
My database object is something like this:
{
  name: 'alex',
  age: 23,
  profession: 'superhero',
  image: *what to put here ???*
}
One idea is to use the object reference that is created and store the image using the same reference.
Any tutorial or ideas appreciated.
We often recommend storing the gs://bucket/path/to/object reference; otherwise, store the https://... URL.
See Zero to App and its associated source code (here we use the https://... version) for a practical example.
var storageRef = firebase.storage().ref('some/storage/bucket');
var saveDataRef = firebase.database().ref('users/');
var uploadTask = storageRef.put(file);

uploadTask.on('state_changed', uploadTick, (err) => {
  console.log('Upload error:', err);
}, () => {
  // On completion, save the record along with the file's download URL
  saveDataRef.update({
    name: 'alex',
    age: 23,
    profession: 'superhero',
    image: uploadTask.snapshot.downloadURL
  });
});
This upload function sits inside an ES6 class and is passed a callback (uploadTick) for the .on('state_changed') listener from the calling component. That way you can pass back upload status and display it in the UI.
uploadTick(snap) {
  console.log("update ticked", snap);
  this.setState({
    bytesTransferred: snap.bytesTransferred,
    totalBytes: snap.totalBytes
  });
}
The file to upload is passed to this upload function; I am just using a React form upload to get the file and doing a little processing on it, so your approach may vary.
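For reference, a minimal sketch of that part, assuming a plain React file input; this.upload is a hypothetical wrapper around the storageRef.put(file) call above:

// Hypothetical sketch: pull the File object off a file input's change
// event and pass it to the upload function shown above.
handleFileChange(event) {
  const file = event.target.files[0]; // first selected File
  if (file) {
    this.upload(file); // hypothetical wrapper around storageRef.put(file)
  }
}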
Related
After attempting to delete a file, I get a notification that it was successful. However, when I check the Storage console page, the file is still there.
If I manually specify the name of a file that was previously uploaded, then that one is deleted successfully.
Any idea what I'm doing wrong here? Is it somehow related to the fact that I just uploaded the file and now I'm deleting it? Is there some kind of cache on the storage ref that I have to query again in order for it to know that the new file exists?
I am following the documentation provided by Google here:
https://firebase.google.com/docs/storage/web/delete-files
Here is my code:
const storage = firebase.storage();
const picture = storage.ref().child('images/' + file.name);

picture
  .delete()
  .then(function () {
    notification.success({
      message: 'File Deleted',
      placement: 'bottomLeft',
    });
  })
  .catch(function (error) {
    console.error(error);
    notification.warning({
      message: 'There was an error',
      placement: 'bottomLeft',
    });
  });
Update
When I say "manually specifying the file", I mean that I can do this:
const picture = storage.ref().child('images/alreadyUploadedImage.png');
and then run the same delete code. This is why I asked about caching: it seems that if I reload my browser session after changing this text in my code, I can then delete a file. This also doesn't work for the file I just uploaded (before I refresh my browser). If I change the name in the code to 'images/image.png', upload an image with that name, and then immediately try to delete it, the delete doesn't work. But if I then refresh the browser, add another image, and delete that one, the 'image.png' file is gone from storage.
Here is a gif showing the file and firebase storage after the delete is complete on the client.
It turns out I was calling the put method again after calling the delete method. I'm using AntD and the Upload component, which has getValueFromEvent={normFile}. This "normFile" function gets called every time a file is uploaded or removed, so I just had to return from that function if the event was event.status === 'removed'.
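A minimal sketch of that guard, assuming AntD's usual normFile shape (the uploadToStorage helper is hypothetical):

// Hypothetical sketch of the fix: assumes the AntD Upload change event
// carries the removed file's status on e.file.status.
const normFile = (e) => {
  if (Array.isArray(e)) return e;
  if (e.file && e.file.status === 'removed') {
    return e.fileList; // bail out: don't call put() again for a removed file
  }
  uploadToStorage(e.file); // hypothetical helper that calls storage.put()
  return e.fileList;
};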
My question: how do I upload files using FilePond on click, rather than automatically like it does out of the box? I also need to take a number of other actions with these images (like displaying them for review prior to upload), add other data to the FormData that gets sent, and dispatch actions to Redux.
Normally I would create a FormData object and append the files and other values to it before POSTing it to some endpoint (with any custom headers needed). However, when I inspected the FilePond instance, it seemed like the only thing I have access to is a blob... not the actual files. Is this accurate? Do I need to follow some FilePond-specific technique to get file upload to work?
FilePond's docs have a custom config value called "server" that appears to have access to an actual file in the more advanced examples, so is this the way it must be done? Can't I just grab the files (from somewhere I do not currently see on the FilePond instance) and append them to an object for use in my normal "service"?
Any tips are appreciated. In a React app I want to upload a variable number of files on click, after appending other form data and setting headers (using Axios, ideally), and POST these files to an API.
Example from their docs uses a prop like:
server="/api"
I want something like (fake code):
server={submitImages} <-- this should only happen onclick of some button
where:
submitImages = (fieldName, file) => {
  const formData = new FormData()
  formData.append(fieldName, file, file.name)
  formData.append('foo', this.props.foo)
  const docuploadresult = this.props.uploadDocs(formData) // <-- a service that lives elsewhere and actually does the POST
  docuploadresult.then(result => {
    // success
  }, error => {
    // error
  })
}
And my problems are that I don't see why this needs to happen in some special config object like server, I don't see how to make this happen on click, and I don't see an actual file anywhere.
I may be overthinking this?
FilePond offers the server property so it can handle the uploads for you. But this is not required; you can use getFiles to easily request all file items (and File objects) in FilePond and upload them yourself.
Add your own submit button to the form and use submitImages below to submit the files.
submitImages = (fieldName) => {
  const formData = new FormData();

  this.filepondRef.getFiles()
    .map(fileItem => fileItem.file)
    .forEach(file => {
      formData.append(fieldName, file, file.name);
    });

  // upload here
}
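For completeness, a rough sketch of the wiring this assumes: filepondRef comes from a ref on the FilePond component, and your own button triggers the submit (names are illustrative):

<FilePond ref={ref => (this.filepondRef = ref)} allowMultiple={true} />
<button onClick={() => this.submitImages('images')}>Upload</button>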
If you want to show image previews you can add the image preview plugin.
https://pqina.nl/filepond/docs/patterns/plugins/image-preview/
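Roughly, registration follows FilePond's documented plugin pattern (shown here with the react-filepond wrapper; package names per the docs linked above):

// Register the image preview plugin before rendering the component.
import { FilePond, registerPlugin } from 'react-filepond';
import FilePondPluginImagePreview from 'filepond-plugin-image-preview';

registerPlugin(FilePondPluginImagePreview);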
The put method for Firebase Storage seems to only take one file at a time. How do I get this to work with multiple files? I am trying to wait for each upload to finish and collect a download URL for each, then save these URLs in an array in a node in the Realtime Database, but I can't seem to figure out the best way to handle this.
I wrote a GitHub gist of this:
// set it up
firebase.storage().ref().constructor.prototype.putFiles = function(files) {
  var ref = this;
  return Promise.all(files.map(function(file) {
    return ref.child(file.name).put(file);
  }));
}

// use it!
firebase.storage().ref().putFiles(files).then(function(metadatas) {
  // Get an array of file metadata
}).catch(function(error) {
  // If any task fails, handle this
});
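Since the question also asks for the download URLs, here is a hedged follow-up sketch: assuming an SDK version where each resolved snapshot exposes a ref with getDownloadURL(), you can collect the URLs and save them in one write (the 'uploads' database path is just an example):

firebase.storage().ref().putFiles(files)
  .then(function(snapshots) {
    // Ask each upload's ref for its download URL
    return Promise.all(snapshots.map(function(snap) {
      return snap.ref.getDownloadURL();
    }));
  })
  .then(function(urls) {
    // Save the array of URLs to the Realtime Database (example path)
    return firebase.database().ref('uploads').set(urls);
  })
  .catch(function(error) {
    console.error('Upload failed:', error);
  });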
Is there a simple way to get the download URL of a file uploaded to Firebase?
(I've tried playing around with the snapshot returned by my upload function and couldn't find anything...)
fileref.put(file).then(function(snapshot) {
  self.addEntry(snapshot);
  // return snapshot.url???
});
The documentation referenced by Yamil - Firebase javascript SDK file upload - recommends using the snapshot's ref property to invoke the getDownloadURL method, which returns a promise that resolves with the download link.
Using your code as a starting point:
fileref.put(file)
  .then(snapshot => {
    return snapshot.ref.getDownloadURL(); // will return a promise with the download link
  })
  .then(downloadURL => {
    console.log(`Successfully uploaded file and got download link - ${downloadURL}`);
    return downloadURL;
  })
  .catch(error => {
    // Handle any error that occurs along the way.
    console.log(`Failed to upload file and get link - ${error}`);
  });
I know it seems like unnecessary effort, and that you should be able to get the link via a property of the snapshot, but this is what the Firebase team recommends; they probably have a good reason for doing it this way.
To get the URL created by default from the snapshot, you can use downloadURL, i.e. snapshot.downloadURL.
Remember to always keep tracking the progress of the upload by using .on('state_changed'). Documentation here.
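A minimal sketch of that progress tracking, following the pattern from the docs:

// Observe 'state_changed' events on the upload task and log progress.
var task = fileref.put(file);
task.on('state_changed', function(snapshot) {
  var pct = (snapshot.bytesTransferred / snapshot.totalBytes) * 100;
  console.log('Upload is ' + pct.toFixed(0) + '% done');
});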
When using user-based security rules as given in the official docs:
// Only an individual user can write to "their" images
match /{userId}/{imageId} {
  allow write: if request.auth.uid == userId;
}
The URL retrieved by snapshot.downloadURL exposes the userId. How can I overcome this security risk?
Good Day.
Been racking the ol' noggin for a way to solve this.
In a nutshell, I have a form that has a number of text inputs as well as a file input element to upload said file to AWS S3 (via the lepozepo:s3 package, ver 5.1.4). The nice thing about this package is that it does not need the server, thus keeping resources in check.
This S3 package uploads the file to my configured bucket and returns the URL to access the image, among a few other data points.
So, back to the form. I need to put the returned AWS URL into the database along with the other form data. HOWEVER, the S3 call takes more time than the app waits for since it is async, so the field within my post to Meteor.call() is undefined only because it hasn't waited long enough to get the AWS URL.
I could solve this by putting the Meteor.call() right into the callback of the S3 call. However, I was hoping to avoid that, as I'd much rather have the S3 upload be its own module or helper function, or even a function outside of any helpers, since it could be reused in other areas of the app for file uploads.
Pseudo-code:
Template.contacts.events({
  'submit #updateContact': function(e, template) {
    s3.upload({file: inputFile, path: client}, function(error, result) {
      if (error) {
        // throw error
      } else {
        var uploadInfo = result;
      }
    });

    formInfo = {name: $('[name=name]').val(), file: uploadInfo}; // <= file is undefined because S3 hasn't finished yet

    Meteor.call('serverMethod', formInfo, function(e, r) {
      if (e) {
        // throw error message
      } else {
        // show success message
      }
    });
  }
});
I could put the formInfo and the Meteor.call() in the S3 callback, but that would result in more complex code and less code reuse, and IMO this is a perfect place for code reuse.
I've tried wrapping the s3 call in its own function, with and without a callback. I've tried using ReactiveVars. I would think that updating the db another time with just the S3 file info would make the S3 abstraction more complex, as it'd need to know the _id and such...
Any ideas?
Thanks.
If you are using JavaScript, it's best to embrace callbacks!
What is it about using callbacks like this that you do not like, or believe is not modular or reusable?
As shown below, the uploader function does nothing but wrap s3.upload. But you mention this is pseudocode, so I presume you left out logic you want included in the modular call to s3.upload (include it there), while decoupling the logic around handling the response (pass in a callback).
uploader = function(s3_options, cb) {
  s3.upload(s3_options, function(error, result) {
    if (error) {
      cb(error);
    } else {
      cb(null, result);
    }
  });
};
Template.contacts.events({
  'submit #updateContact': function(e, template) {
    var cb = function(error, uploadInfo) {
      if (error) {
        // throw error message
        return;
      }
      formInfo = {name: $('[name=name]').val(), file: uploadInfo};
      Meteor.call('serverMethod', formInfo, function(e, r) {
        if (e) {
          // throw error message
        } else {
          // show success message
        }
      });
    };

    uploader({file: inputFile, path: client}, cb); // you don't show where `inputFile` or `client` come from
  }
});