I have a products collection and I want to allow users to add multiple images to a product.
The catch is that I want images to upload instantly, but because the product isn't saved yet, the images can't be embedded or joined by a foreign key.
Is it possible to store the images client-side and then, after the product gets saved, add the images to the product database?
How should I solve this?
Thanks
var imageStore = new FS.Store.GridFS("images");
Images = new FS.Collection("images", {
  stores: [
    imageStore, // was defined but unused; CollectionFS supports multiple stores
    new FS.Store.FileSystem("original")
  ],
  filter: {allow: {contentTypes: ['image/*']}}
});
In Meteor, images will upload "instantly" due to the "db everywhere" paradigm, i.e. they will be stored optimistically in the MiniMongo client-side db and then sent to the server. If they are rejected by the server, an error should be generated explaining the reason for the rejection (in your server logs).
For examples of how to upload files using CollectionFS, take a look at the code samples in the CollectionFS wiki.
This one, for example, seems to be what you are looking for:
Template.myForm.events({
  'change .myFileInput': function(event, template) {
    FS.Utility.eachFile(event, function(file) {
      Images.insert(file, function (err, fileObj) {
        // If !err, we have inserted a new doc with ID fileObj._id, and
        // kicked off the data upload using HTTP
      });
    });
  }
});
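If you need to associate these uploads with a product that doesn't exist yet, one option (a sketch of my own, not something CollectionFS prescribes) is to collect the fileObj._id values from the insert callback and attach them to the product document once it is saved; the Products collection and productName here are assumptions:

var pendingImageIds = []; // ids of images uploaded before the product exists

// inside the insert callback above:
// if (!err) pendingImageIds.push(fileObj._id);

// later, when the product itself is saved:
Products.insert({ name: productName, imageIds: pendingImageIds }, function (err, productId) {
  if (!err) pendingImageIds = []; // the images are now linked to the product by id
});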
Once the files have been stored in the client db, you should be able to display them instantly, such as by using this method.
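For instance, a minimal sketch of such a reactive display (the template name imageList is my assumption):

Template.imageList.helpers({
  images: function () {
    // reactive cursor: the template re-renders as each optimistic insert lands
    return Images.find();
  }
});
// in the corresponding template: {{#each images}} <img src="{{this.url}}"> {{/each}}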
If this doesn't help, please provide a link to a repo or more details regarding the issue, such as any errors that are being generated, especially server-side errors.
After attempting to delete a file I get a notification that it was successful. However, when I check the storage console page, the document is still there.
If I manually specify the name of a file that was previously uploaded, then that one will be deleted successfully.
Any idea what I'm doing wrong here? Is it somehow related to the fact that I just uploaded the file and am now deleting it? Is there some kind of cache on the storage ref that I have to query again in order for it to know that the new file exists?
I am following the documentation provided by Google here.
https://firebase.google.com/docs/storage/web/delete-files
Here is my code
const storage = firebase.storage();
const picture = storage.ref().child('images/' + file.name);
picture
  .delete()
  .then(function () {
    notification.success({
      message: 'File Deleted',
      placement: 'bottomLeft',
    });
  })
  .catch(function (error) {
    console.error(error);
    notification.warning({
      message: 'There was an error',
      placement: 'bottomLeft',
    });
  });
Update
When I say "manually specifying the file", I mean that I can do this:
const picture = storage.ref().child('images/alreadyUploadedImage.png');
Then I run the same delete code. This is why I asked about caching: it seems that if I reload my browser session after changing this text in my code, I can then delete the file. Also, this doesn't work for a file I just uploaded (before I refresh my browser). If I change the name in the code to 'images/image.png', then upload an image with that name and immediately try to delete it, it doesn't work. But if I then refresh the browser and add another image, then delete that one, the 'image.png' file is gone from storage.
Here is a gif showing the file and Firebase storage after the delete is complete on the client.
It turns out I was calling the put method again after calling the delete method. I'm using AntD and its Upload component, which has getValueFromEvent={normFile}. This normFile function gets called every time a file is uploaded or removed, so I just had to return early from that function when the event status was 'removed' (event.status === 'removed').
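A minimal sketch of that early return, assuming normFile receives AntD's Upload change event (depending on your AntD version, the status may live on event.file.status instead):

function normFile(event) {
  // bail out before any put() logic when the change was a removal,
  // so the delete isn't immediately undone by a re-upload
  if (event.status === 'removed') {
    return event.fileList;
  }
  // ...existing handling that eventually calls put()...
  return event && event.fileList;
}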
I am currently trying to make an image uploader using Vue 2, Vuetify, and Firebase/Firestore. At the moment, my images upload successfully to Firestore and the reference download URLs are stored in an array called "images", but they are not stored in the order they appear on my frontend.
This is currently how the frontend looks with files selected to be uploaded:
When pressing Submit Presentation, I have it print the files in the console:
https://i.imgur.com/GSQqYtM.png
As you can see, they print in order, but another issue that arises is that the images don't get sent to the Firebase database, although they still get uploaded to Firestore. For them to show up in the Firebase database, I have to press the Submit Presentation button again, or else the array in Firebase will just show up as images: []. This is a whole other issue, I assume, but it might help with figuring out the main one.
When I press Submit Presentation again, the images array gets updated with each download URL, but not in order:
It says Slide1, then Slide3, Slide12, Slide9, etc. I have no idea why this is happening. Even in Firestore, they are not in order, and it's a completely different order from the one in the Firebase database.
Here is how I handle the file uploading when the Submit Presentation button is pressed:
uploadImages() {
  if (this.files) {
    this.files.forEach(file => {
      var storageRef = fb.storage().ref('presentations/' + file.name);
      let uploadTask = storageRef.put(file);
      uploadTask.on('state_changed', (snapshot) => {
      }, (error) => {
        // errors handled here
      }, () => {
        uploadTask.snapshot.ref.getDownloadURL().then((downloadURL) => {
          this.presentation.images.push(downloadURL);
        })
      })
      console.log(file);
      db.collection("presentations").doc("mainPresentation").update(this.presentation)
    })
  }
}
I just want the images to be stored in the order they were selected because, as you probably noticed, these are presentation slides; when I read from Firebase, I don't want them displayed in a different order.
Would appreciate any help, thanks!
Two things.
Firstly, you are kicking off all of these uploads asynchronously and simultaneously. Since they are all happening at the same time, they can finish in a different order. If you need them to happen in order, you should wait for the first one to complete before kicking off the second one. Alternatively, don't worry about the upload order at all and sort the data afterward as needed.
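A sketch of the sequential approach with async/await, reusing the names from the question (fb, db, this.presentation), so each upload finishes before the next one starts:

async uploadImages() {
  if (!this.files) return;
  for (const file of this.files) {
    // wait for each upload before starting the next,
    // so URLs are pushed in selection order
    const snapshot = await fb.storage().ref('presentations/' + file.name).put(file);
    const downloadURL = await snapshot.ref.getDownloadURL();
    this.presentation.images.push(downloadURL);
  }
  // update the document once, after every URL has been collected
  await db.collection('presentations').doc('mainPresentation').update(this.presentation);
}

This would also fix the "have to press Submit twice" symptom, since the update no longer runs before the download URLs have arrived.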
Also, the sort order of the files that you see in the Cloud Storage console is lexicographic, not numeric. Lexicographically, "11" comes before "2" because the ASCII value of its first character, "1", is less than that of "2". If you really want to see the files sorted numerically, you should pad the numbers with zeroes so that the lexical sort matches the numeric sort, for example "001", "002", "003", and so on.
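For instance, a one-line sketch (n is the slide number; padStart is standard in modern browsers):

const name = 'Slide' + String(n).padStart(3, '0') + '.png'; // Slide001.png, Slide002.png, ...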
So I have been reading a lot about how to store and fetch data in an efficient way. Basically, my application is about time management/capturing for projects. I would be very happy about any opinion on which strategy I should use, or even suggestions for other strategies. The main concern is the limited space for local storage in the different browsers.
This is the main data I have to store:
db_projects: This is a database where the projects themselves are stored.
db_timestamps: Here go the timestamps per project whenever a project is running.
I came up with the following strategies:
1: Storing the status of the project in the timestamps
When a project is started, a timestamp is added to db_timestamps like so:
db_timestamps.put({
  _id: String(Date.now()),
  title: projectID,
  status: status // could be: 1=active / 2=inactive / 3=paused
})...
This follows the strategy of only adding data to the db and never modifying any entries. The problem I see here is that if I want to get all active projects, for example, I would need to query the whole db_timestamps, which can contain thousands of entries. Since I cannot use the ID to search for all active projects, this could result in quite a heavy DB query.
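To illustrate how heavy that gets, a sketch under strategy 1: finding the currently active projects means scanning every timestamp and keeping only the latest status per project:

db_timestamps.allDocs({ include_docs: true }).then(function (result) {
  var latest = {};
  result.rows.forEach(function (row) {
    // allDocs sorts by _id, and the _ids are epoch-millisecond strings,
    // so later rows overwrite earlier ones
    latest[row.doc.title] = row.doc.status;
  });
  var active = Object.keys(latest).filter(function (p) { return latest[p] === 1; });
  console.log('active projects:', active);
});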
2: Storing the status of the project in db_projects
Each time a project changes its status, the project itself is updated. The "get all active projects" query would then be much more resource-friendly, since there are a lot fewer projects than timestamps. But this would also mean that each status change revisions the project entry and therefore produces "a lot" of overhead. I'm also not sure whether the compaction feature would do a good job, since not all revision data is deleted (the documents are, but the leaf revisions are not). This means that for a state change we keep at least the _rev information, which is still a string of 34 chars, for changing only the status (1 char). Or can I delete the leaf revisions after conflict resolution?
3: Storing the status in a separate DB like db_status
This leads to the same problem as in #2, since status changes lead to revisions in this DB. And if the states were instead added in "only add data" mode (like in #1), it would just quickly fill with entries.
The general problem is that there is a limited amount of space you can put into IndexedDB. On the other hand, the principle of CouchDB is that storage space is cheap (which is indeed true when you store on the server side only). Here is an interesting discussion about that.
So this is the solution that I use for now. I am using a mix of solution 1 and solution 2 from above, with the following additions:
Storing only the timestamps in a synced database (db_timestamps), following the "only add data" principle.
Storing the projects and their states in a separate local (not synced) database (db_projects). For this I still use PouchDB, since it has a much simpler API than IndexedDB.
Storing the new/changed project status in each timestamp as well (so db_projects could be rebuilt out of db_timestamps if needed).
Deleting db_projects every so often and repopulating it, so that the revision data (the overhead for this db in my case) is eliminated and the size stays acceptable.
I use the following code to rebuild my DB:
//--------------------------------------------------------------------
function rebuild_db_project(){
  db_project.allDocs({
    include_docs: true,
    //attachments: true
  }).then(function (result) {
    // do stuff
    console.log('I have read the DB and delete it now...');
    deleteDB('DB_Projekte', '_pouch_DB_Projekte'); // the PouchDB name must match the one recreated below
    return result;
  }).then(function (result) {
    console.log('Creating the new DB...' + result);
    db_project = new PouchDB('DB_Projekte');
    var dbContentArray = [];
    result.rows.forEach(function (row) {
      delete row.doc._rev; // drop the doc's revision, else bulkDocs() would raise a conflict error
      dbContentArray.push(row.doc);
    });
    return db_project.bulkDocs(dbContentArray);
  }).then(function (response) {
    console.log('I have successfully populated the DB with: ' + JSON.stringify(response));
  }).catch(function (err) {
    console.log(err);
  });
}
//--------------------------------------------------------------------
function deleteDB(PouchDB_Name, IndexedDB_Name){
  console.log('DELETE');
  new PouchDB(PouchDB_Name).destroy().then(function () {
    // database destroyed
    console.log("pouchDB destroyed.");
  }).catch(function (err) {
    // error occurred
    console.log(err);
  });
  var DBDeleteRequest = window.indexedDB.deleteDatabase(IndexedDB_Name);
  DBDeleteRequest.onerror = function(event) {
    console.log("Error deleting database.");
  };
  DBDeleteRequest.onsuccess = function(event) {
    console.log("IndexedDB deleted successfully");
    console.log(DBDeleteRequest.result); // should be undefined
  };
}
So I use not only the PouchDB destroy() method but also the indexedDB.deleteDatabase() call to get the storage freed almost completely (there are still some 4 kB that are not freed, but this is insignificant to me).
The timing is not really clean yet, but it works for me. I'd be happy if someone has an idea for making the timing work properly (the problem for me is that IndexedDB does not support promises).
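One way to tame the timing, as a sketch: wrap the deleteDatabase request in a Promise yourself (deleteIndexedDB is a hypothetical helper name), so it can be chained after the destroy() promise:

function deleteIndexedDB(name) {
  return new Promise(function (resolve, reject) {
    var request = window.indexedDB.deleteDatabase(name);
    request.onsuccess = function () { resolve(); };
    request.onerror = function (event) { reject(event); };
    // onblocked fires if another tab still holds the database open
    request.onblocked = function () { console.log('deletion blocked by another open connection'); };
  });
}

With that, the rebuild could wait for both steps: new PouchDB(name).destroy().then(function () { return deleteIndexedDB(idbName); }) before repopulating.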
I don't know the best way to handle huge Mongo databases with Meteor.
In my example I have a database collection of addresses, each with its geo location (all the code snippets are just examples).
Example:
{
  address : 'Some Street',
  geoData : [lat, long]
}
Now I have a form where the user can enter an address to get the geo data. Very simple. But the problem is that the collection with the geo data has millions of documents in it.
In Meteor you have to publish a collection on the server side and subscribe on the client and server side. So my code is like this:
// Client / Server
Geodata = new Meteor.Collection('geodata');

// Server side
Meteor.publish('geodata', function(){
  return Geodata.find();
});

// Client / Server
Meteor.subscribe('geodata');
Now a person has filled in the form, and after this I get the data. Then I search for the right document to return. My method is this:
// Server / Client
Meteor.methods({
  getGeoData : function (address) {
    return Geodata.find({address : address});
  }
});
The result is the right one, and this is still working. But my question now is:
What is the best way to handle this example with a huge database like mine? The problem is that Meteor saves the whole collection in the user's cache once I subscribe to it. Is there a way to subscribe to just the results I need, and, when the user reuses the form, overwrite that subscription? Or is there another good way to save performance with huge databases used the way they are in my example?
Any ideas?
Yes, you can do something like this:
// client
Deps.autorun(function () {
  // will re-subscribe whenever the 'center' session variable changes
  Meteor.subscribe("locations", Session.get('center'));
});

// server
Meteor.publish('locations', function (centerPoint) {
  // sanitize the input
  check(centerPoint, { lat: Number, lng: Number });
  // return a limited number of documents, relevant to our app
  return Locations.find({ $near: centerPoint, $maxDistance: 500 }, { limit: 50 });
});
Your clients would ask only for some subset of the data at a time, i.e. you don't need the entire collection most of the time; usually you need some specific subset. And you can ask the server to keep you up to date only on that particular subset. Bear in mind that the more different "publish requests" your clients make, the more work there is for your server to do, but that's how it is usually done (here is the simplified version).
Notice how we subscribe in a Deps.autorun block, which will re-subscribe depending on the center Session variable (which is reactive). So your client can check out a different subset of the data just by changing this variable.
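For example (a sketch; the coordinates are made up), some map-move handler on the client would only need to do:

Session.set('center', { lat: 52.52, lng: 13.405 }); // the autorun above re-subscribes to the new area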
When it doesn't make sense to ship your entire collection to the client, you can use methods to retrieve data from the server.
In your case, you can call the getGeoData function when the form is filled out and then display the results after the method returns. Try taking the following steps:
Clearly divide your client and server code into their respective client and server directories if you haven't already.
Remove the geodata subscription on the server (only clients can activate subscriptions).
Remove the geodata publication on the server (assuming this isn't needed anymore).
Define the getGeoData method only on the server. It should return an object, not a cursor, so use findOne instead of find (see the sketch after these steps).
In your form's submit event, do something like:
Meteor.call('getGeoData', address, function(err, geoData) {
  Session.set('geoDataResult', geoData);
});
You can then display the geoDataResult data in your template.
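A minimal sketch of that server-only method, assuming the Geodata collection from the question:

// server
Meteor.methods({
  getGeoData: function (address) {
    check(address, String); // sanitize the input
    // findOne returns a plain object that can be serialized back to the client
    return Geodata.findOne({address: address});
  }
});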
I have a Meteor method that does an insert.
I'm using Regulate.js for form validation.
I set the game_id field to Meteor.uuid() to create a unique value that I also route to /game_show/:game_id using Iron Router.
As you can see, I'm logging the details of the game, and this works fine (image link to the log below).
file: /lib/methods.js
Meteor.methods({
  create_game_form : function(data){
    Regulate.create_game_form.validate(data, function (error, data) {
      if (error) {
        console.log('Server side validation failed.');
      } else {
        console.log('Server side validation passed!');
        // Save data to database or whatever...
        //console.log(data[0].value);
        var new_game = {
          game_id: Meteor.uuid(),
          name : data[0].value,
          game_type: data[1].value,
          creator_user_id: Meteor.userId(),
          user_name: Meteor.user().profile.name,
          created: new Date()
        };
        console.log("NEW GAME BEFORE INSERT: ", new_game);
        GamesData.insert(new_game, function(error, new_id){
          console.log("GAMES NEW MONGO ID: ", new_id);
          var game_data = GamesData.findOne({_id: new_id});
          console.log('NEW GAME AFTER INSERT: ', game_data);
          Session.set('CURRENT_GAME', game_data);
        });
      }
    });
  }
});
All of the data coming out of the console.log calls at this point looks fine.
After this method call, the client routes to /game_show/:game_id:
Meteor.call('create_game_form', data, function(error){
  if(error){
    return alert(error.reason);
  }
  //console.log("post insert data for routing variable ", data);
  var created_game = Session.get('CURRENT_GAME');
  console.log("Session Game ", created_game);
  Router.go('game_show', {game_id: created_game.game_id});
});
On this view, I try to load the document with the game_id I just inserted
Template.game_start.helpers({
  game_info: function(){
    console.log(this.game_id);
    var game_data = GamesData.find({game_id: this.game_id});
    console.log("trying to load via UUID ", game_data);
    return game_data;
  }
});
Sorry, I can't upload images... :-(
https://www.evernote.com/shard/s21/sh/c07e8047-de93-4d08-9dc7-dae51668bdec/a8baf89a09e55f8902549e79f136fd45
As you can see from the image of the console log below, everything matches:
the id logged before the insert
the id logged in the insert callback using findOne()
the id passed in the URL
However, the Mongo ID and the UUID I inserted ARE NOT THERE; the only document in there has all the other fields matching except those two!
Not sure what I'm doing wrong. Thanks!
The issue is that your code is running on the client side (or at least it looks like it is from the screenshot).
In Meteor, Meteor.methods that run on the client side are simulation stubs. What this means is you put stuff in there that creates 'fake' data so the user doesn't feel latency, since it could take 1-4 seconds for the server to reply with what was actually inserted in the database. This isn't really an issue, though.
The reason this causes you trouble is that the method runs twice (once on the server and once on the client), so it generates two different Meteor.uuid()s, since they are random. This is why you see the inconsistency: what you see initially is the 'fake' one, then the server sends down the real one.
This is how Meteor makes it look like data has been inserted instantly, even though it's not fully inserted yet.
To fix this, get rid of the method stub you have on the client so that you only have the one running on the server. You would then need to get the game_id from the server, though, and not from the client.
If you want to keep the latency compensation, pass the Meteor.uuid() in data like you do your other form data. This way the game_id will be consistent on both the server and the client.
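A sketch of that second approach, reusing the names from the question (threading game_id through as an extra method argument is my assumption about how you'd wire it):

// client: generate the id once, so the stub and the server insert agree
var game_id = Meteor.uuid();
Meteor.call('create_game_form', data, game_id, function (error) {
  if (error) {
    return alert(error.reason);
  }
  // the id is known up front, no Session round-trip needed
  Router.go('game_show', {game_id: game_id});
});

// in the method, accept the extra argument and use it instead of
// generating a new uuid:
//   create_game_form: function (data, game_id) { ... game_id: game_id ... }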