I have an FS Collection in Meteor called "MyUploads". I will be performing some processing on the files uploaded into the Collection, and additional files will then be created and added to MyUploads. I have created an event handler called #parseUploads in which this takes place. Within the handler, and before the new files are added to MyUploads, I have created a variable:
var previousCount = fileCount;
which stores the original count of documents that the user added to the Collection. The parsing function then runs on each of these documents and adds the newly parsed documents to the collection.
My question is: how do I loop through the Collection from the first document up to the document at position previousCount?
In other words, if previousCount has a value of 3 (meaning the user uploaded 3 documents), then after parsing there will be 3 additional documents in the collection. I would like to know how to loop through the Collection and delete only the first 3 documents, while leaving the 3 newly added documents in the Collection.
I would recommend adding a boolean field to the collection to act as a flag to denote parsed items. Once an item is parsed, you can update it.
Then you can remove items from the collection based on the presence of that flag.
MyCollection.remove({stale: true});
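A minimal sketch of the whole flow inside the #parseUploads handler, assuming the stale flag from the remove call above and a hypothetical parseFile() standing in for your parsing step (with CollectionFS you may need to keep custom flags under metadata instead of at the top level):
// snapshot the originals first so the cursor does not pick up newly inserted documents
MyUploads.find().fetch().forEach(function (file) {
  parseFile(file);                                    // creates and inserts the parsed document(s)
  MyUploads.update(file._id, {$set: {stale: true}});  // flag the original once it has been parsed
});

// afterwards, remove only the flagged originals
MyUploads.remove({stale: true});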
Hope that helps,
Elliott
[Image attachment: Firebase Realtime Database structure]
As you can see in the structure, the Online key node has sub-nodes whose keys are numbers and whose values are long strings.
There are many sub-nodes (around 27,000).
When I use
firebase.database().ref().child('12321/Audio/060820/Online').once('value', sn => {
  // value event listener, once
  // I need only the top key-value pair, i.e. 132607-Data:.......
})
this method loads the entire Online node, which takes more than a minute. I need an efficient way to get only the few latest entries.
You can use either limitToLast or limitToFirst:
firebase.database().ref().child('12321/Audio/060820/Online').limitToFirst(10).once('value', sn => {
  // value event listener, once
  // I need only the top key-value pair, i.e. 132607-Data:.......
})
https://firebase.google.com/docs/reference/js/firebase.database.Reference#limittofirst
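If the newest entries are the ones with the largest keys, a variant using orderByKey together with limitToLast may be closer to what you want (the count of 10 is just an example):
firebase.database().ref().child('12321/Audio/060820/Online')
  .orderByKey()
  .limitToLast(10)   // only the 10 entries with the highest keys are downloaded
  .once('value', sn => {
    sn.forEach(child => {
      console.log(child.key, child.val());
    });
  });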
I have a collection whose documents look something like this:
count: number
first: timestamp
last: timestamp
The first value should (almost) never change after the document's creation.
In a batch write operation, I am trying to update documents in this collection, or create those documents that do not yet exist. Something like
batch.setData([
    "count": FieldValue.increment(someInteger),
    "first": someTimestamp,
    "last": someTimestamp
], forDocument: someDocumentRef, mergeFields: ["count", "last"])
My hope was that by excluding first from the mergeFields array, Firestore would set count and last by merging it into an existing document or making a new one, and set first only if it had no previous value (i.e., the document didn't exist before this operation). It is clear to me now that this is not the case, and instead first is completely ignored. Now I'm left wondering what the Firestore team intended for this situation.
I know that I could achieve this with a Transaction, but that doesn't tie in very well with my batch write. Are Transactions my only option, or is there a better way to achieve this?
I have creation timestamps and other data in my documents, and I handle this with separate create and update functions rather than trying to do everything in a single write.
The initial creation function includes the created date etc., and subsequent updates use the non-destructive update, so you simply omit from the update payload any fields you do not want to overwrite.
eg. to create:
batch.set(docRef, {created: someTimestamp, lastUpdate: someTimestamp})
then to update:
batch.update(docRef, {lastUpdate: someTimestamp, someOtherField: someData})
This will not overwrite the created field or any other existing fields, but it will create someOtherField if it does not exist.
If you need an "only update existing fields" update after the document has been created for the first time, then currently you have to read the document first to find out whether the fields exist, and then build an update payload that patches only the desired fields. This can be done in a transaction, or you can write the logic yourself, depending on your needs.
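A minimal sketch of that read-then-write pattern as a transaction, in JS SDK syntax to match the snippets above (docRef, someInteger and someTimestamp are the placeholders from the question; the field names are the ones you listed):
const db = firebase.firestore();

db.runTransaction(async (tx) => {
  const snap = await tx.get(docRef);
  const payload = {
    count: firebase.firestore.FieldValue.increment(someInteger),
    last: someTimestamp,
  };
  // only write "first" when the document (or the field) does not exist yet
  if (!snap.exists || snap.get('first') === undefined) {
    payload.first = someTimestamp;
  }
  tx.set(docRef, payload, { merge: true });
});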
When I insertOne() a document into the collection, Mongo adds the document between the first and last documents.
It is expected to be added at the end, right? Ideally, though, I would like the document to be added at the beginning, the way .unshift() works in JS.
I'm building a blog, so new posts should appear at the top of the list.
But I can always .reverse(), of course.
The main question is why the document is added in the middle.
You should sort the results by creation time when you output them, rather than trying to change the order of insertion in the collection.
db.posts.find().sort({creation_time: -1})
This way the recent entries will come out on top.
Ref: https://docs.mongodb.com/manual/reference/method/cursor.sort/#cursor.sort
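For that to work, each post needs a creation timestamp stored when it is inserted; a minimal sketch (the field name creation_time matches the query above, the other fields are made up):
// store the timestamp at insert time
db.posts.insertOne({ title: "My new post", body: "...", creation_time: new Date() })

// read back newest-first, e.g. the ten most recent posts
db.posts.find().sort({ creation_time: -1 }).limit(10)
The physical order of documents inside the collection then no longer matters; the sort in the query determines what readers see.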
Are there any methods or packages that can help me add auto-increment to an existing collection? The internet is full of information about how to add auto-increment before you create a collection, but I did not find anything about how to add it when the collection already exists...
MongoDB does not have built-in auto-increment functionality.
Create a new collection to keep track of the last sequence value used for insertion:
db.createCollection("counter")
It will hold only one record as:
db.counter.insert({_id:"mySequence",seq_val:0})
Create a JavaScript function as:
function getNextSequenceVal(seq_id) {
    // find the record with _id seq_id and increment its seq_val by 1
    var sequenceDoc = db.counter.findAndModify({
        query: {_id: seq_id},
        update: {$inc: {seq_val: 1}},
        new: true
    });
    return sequenceDoc.seq_val;
}
To give the documents that already exist in your collection a sequence value, note that _id is immutable, so you cannot simply $set it with a multi-update (and the sequence function would only be evaluated once for the whole update anyway). Instead, re-insert each document under a new _id (add conditions to the find() if you only want to migrate some documents):
db.myCollection.find().toArray().forEach(function(doc) {
    db.myCollection.remove({_id: doc._id});  // _id cannot be changed in place
    doc._id = getNextSequenceVal("mySequence");
    db.myCollection.insert(doc);
});
Now you can insert new records into your existing collection as:
db.myCollection.insert({
    "_id": getNextSequenceVal("mySequence"),
    "name": "ABC"
})
MongoDB reserves the _id field at the top level of all documents as a primary key. _id must be unique and always has an index with a unique constraint. It is not an auto-incrementing field, though; by default it is an ObjectId. However, it is possible to define your own auto-incrementing field by following the tutorial in the MongoDB documentation.
Tutorial link: https://docs.mongodb.com/v3.0/tutorial/create-an-auto-incrementing-field/
I am having an extremely bizarre problem. I have a Backbone collection, and I am using the where method to find models in the collection that match a certain attribute. My problem is the inconsistency of the results.
I have a joinedGoalList which keeps track of goals that a user has joined. Let's say that this collection contains two goals with IDs of 1 and 3. When a user accesses /goals/3, a message should display saying that the user has joined the goal.
The problem is that when I access /goals/3, half the time the message displays and the other half of the time it does not.
The odd thing is that this problem only happens on my remote server and not on my local host.
In my code, I query the joinedGoalList for an ID of 3, and if it matches, I know that the matches array has to be greater than 0, so I render the message showing that the user has joined the goal.
Here is the code (joinedGoalList is a Backbone collection):
console.log(joinedGoalList);
var matches = joinedGoalList.where({id: this.model.get("id")});
console.log(matches);
console.log(matches.length);
if (matches.length > 0) {
    console.log("the matches length is > 0");
    this.renderLeaveGoal();
} else {
    console.log("the matches length is 0");
    this.renderJoinGoal();
}
Here are the results of console.log(joinedGoalList) (they are consistent across page loads):
child
_byCid: Object
_byId: Object
_callbacks: Object
length: 2
models: Array[2]
__proto__: ctor
As you can see, the length is 2. One of the objects has an ID of 1 and the other object has an ID of 3. This is consistent throughout the page loads.
The inconsistency occurs when I do a match on the array for an object with an ID of 3. Some page loads find the match while other page loads do not find the match.
The results of console.log(matches.length) are either 0 or 1 on my remote server, yet on my localhost, the results are always 1.
I'm pretty sure that the sequence of events goes like this:
1. You call fetch on the collection to load your data from the server.
2. You call console.log(joinedGoalList); this is asynchronous in some browsers.
3. You call joinedGoalList.where and find an empty collection.
4. The fetch call from step 1 returns and populates the collection.
5. The console.log call from step 2 executes and prints out the populated collection; this call holds a reference to joinedGoalList, and that reference now points at the populated collection.
When you run this locally, the AJAX fetch returns quickly enough that step 4 occurs before step 3, and everything behaves the way you're expecting it to.
You have a couple options here:
fetch has a success callback:
The options hash takes success and error callbacks which will be passed (collection, response) as arguments.
So you could use the success callback to delay whatever is calling where until the server has responded and the collection is populated.
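For example (someGoalId stands in for the id your view is checking; this is just a sketch of the shape):
joinedGoalList.fetch({
  success: function (collection, response) {
    // the collection is populated at this point, so where() sees the loaded models
    var matches = collection.where({ id: someGoalId });
    // ... call renderLeaveGoal() or renderJoinGoal() based on matches.length ...
  }
});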
fetch resets the collection:
When the model data returns from the server, the collection will reset.
and reset will
replace a collection with a new list of models (or attribute hashes), triggering a single "reset" event at the end.
So you could listen for the "reset" event and use that event to trigger whatever is calling where.
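A sketch of that variant (note that on Backbone 1.0 and later, fetch only fires "reset" if you pass {reset: true}):
joinedGoalList.once('reset', function (collection) {
  var matches = collection.where({ id: someGoalId });
  // ... call renderLeaveGoal() or renderJoinGoal() based on matches.length ...
});
joinedGoalList.fetch({ reset: true });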