So I have an application that involves giving an estimated wait time. I currently have my schema set up so that each person's waitTime value is estimated from the count of items in the collection. This works fine.
The problem I'm having is that I need to be able to reduce the estimated wait time by 15 for every person in the database whenever someone is deleted from the database.
For instance, say there are four people currently in the database, with assigned wait times of 15, 30, 45, and 60 respectively. Then let's say the second person is removed (i.e., they cancel their appointment). The two people who were after that second person need to have their estimated wait times updated to 30 and 45 minutes.
{
"_id": "qntsyc9RZqkbHnSGM",
"Name": "John",
"PhoneNumber": "5555555555",
"createdAt": "2017-04-05T05:05:46.024Z",
"currentStatus": "Waiting",
"waitTime": 30 //this is the value I want to reduce for every object in the database
}
How would I go about doing this?
P.S. This is basically creating an index, but I have had issues trying to create an index for my db. I've tried using createIndex() and ensureIndex(), but have had no success (maybe I'm just doing it wrong). If there is a way to create an index for my db then I can work with that as well.
If you want to reduce the value by 15 for all of them, you'll want to use the $inc operator. It accepts negative values, so you can use it to increment or decrement.
The update would look something like this, just change the name of the collection:
db.epace.update({}, {$inc:{waitTime: -15}}, {multi:true})
If you need to decrease the waitTime for the records after the deleted one, you update using the $inc operator with a negative value:
db.queue.update({}, {$inc: {waitTime: -15 }},{ multi:true })
but this will update every document, even the ones before the deleted record, so you have to restrict the update to only the documents that come after the deleted one.
You can target those documents easily (using the createdAt field from the question, with timestamp being the deleted person's createdAt):
db.queue.update(
    { createdAt: { $gt: timestamp } },
    { $inc: { waitTime: -15 } }, { multi: true })
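Putting that together, here is a rough end-to-end sketch in shell syntax. The queue collection name, the cancelled person's _id, and the 15-minute step per person are placeholders taken from the question:
// Hypothetical sketch: look up the person who is cancelling, keeping their createdAt
var cancelled = db.queue.findOne({ _id: "qntsyc9RZqkbHnSGM" });

// remove them from the queue
db.queue.remove({ _id: cancelled._id }, { justOne: true });

// decrement waitTime only for the people who joined after them
db.queue.update(
    { createdAt: { $gt: cancelled.createdAt } },
    { $inc: { waitTime: -15 } },
    { multi: true }
);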
MongoDB update operations affect a single document by default. Hence you need to pass {multi:true} to update multiple records. Try this:
db.dbname.update({}, { $inc: { waitTime: -15 } }, { multi: true }); // mongo query
I have a collection whose documents look something like this:
count: number
first: timestamp
last: timestamp
The first value should (almost) never change after the document's creation.
In a batch write operation, I am trying to update documents in this collection, or create those documents that do not yet exist. Something like
batch.setData([
"count": FieldValue.increment(someInteger),
"first": someTimestamp,
"last": someTimestamp
], forDocument: someDocumentRef, mergeFields: ["count","last"])
My hope was that by excluding first from the mergeFields array, Firestore would set count and last by merging them into an existing document or making a new one, and set first only if it had no previous value (i.e., the document didn't exist before this operation). It is clear to me now that this is not the case, and instead first is completely ignored. Now I'm left wondering what the Firestore team intended for this situation.
I know that I could achieve this with a Transaction, but that doesn't tie in very well with my batch write. Are Transactions my only option, or is there a better way to achieve this?
I have created timestamps and other data in my documents, and I handle this using separate create and update functions rather than trying to do it all at once.
The initial creation function includes the created date etc and then subsequent updates use the non-destructive update, so just omit any fields in the update payload you do not want to overwrite.
e.g. to create:
batch.set(docRef, {created: someTimestamp, lastUpdate: someTimestamp})
then to update:
batch.update(docRef, {lastUpdate: someTimestamp, someOtherField: someData})
This will not overwrite the created field or any other existing fields, but will create someOtherField if it does not exist.
If you need to do an "only update existing fields" operation after the document is created for the first time, then currently you have to read the document first to find out which fields exist, and then build an update payload that patches only the desired fields. This can be done in a transaction, or you can write this logic yourself, depending on your needs.
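For reference, a minimal sketch of that read-then-write pattern as a Firestore transaction (JavaScript SDK in namespaced style rather than the question's Swift; docRef, someInteger, and someTimestamp are the placeholders from the question):
const db = firebase.firestore();

db.runTransaction(async (tx) => {
    // Reads must happen before any writes inside a transaction
    const snap = await tx.get(docRef);
    if (!snap.exists) {
        // Document doesn't exist yet: set every field, including "first"
        tx.set(docRef, { count: someInteger, first: someTimestamp, last: someTimestamp });
    } else {
        // Document exists: update "count" and "last", leave "first" untouched
        tx.update(docRef, {
            count: firebase.firestore.FieldValue.increment(someInteger),
            last: someTimestamp
        });
    }
});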
I have a list of multiple earthquake records (around 200; I've included just 2 for the sake of the example):
{
"EVENTS" : {
"-Yn6oKFQdn5s24R" : {
"event" : {
"date" : "22/04/18",
"time" : "10:01:45",
"place" : "Some place"
},
"timestamp" : "Mon Apr 23 2018 12:05:00 GMT-0600"
},
"-R96Yn6oKFQdn5s" : {
"event" : {
"date" : "23/04/18",
"time" : "11:02:45",
"place" : "Some place"
},
"timestamp" : "Mon Apr 23 2018 12:05:00 GMT-0600"
}
}
}
How does this list work? (on my server)
Every time a new event is detected, it is added to the list.
Sometimes an event may not have been detected correctly and needs an update, or it needs to be deleted for some other reason.
The list has a limit, and when new events are added, some other events are deleted from the list.
So, when the list on the server has changed for any reason and I'm going to push it to Firebase, I don't want to work out which records have changed or been updated; I prefer to delete the complete list and populate it again.
The problem I foresee
What if, after I delete the list on Firebase, some request fails while I'm pushing? If this happens, my list on Firebase would be left empty.
What I have
I have written code that first saves the existing keys, then makes the push. If some write fails, it deletes the newly added keys; if everything succeeds, it deletes the keys saved in the first step and keeps the newly added ones.
So, my question
Is there a better way to deal with this situation in the Firebase Realtime Database? I have read about transactions, but I don't know if they can help here.
Well, based on my specific case, I found that it's more useful to accomplish exactly what I need with .set() instead of .push().
Why?
If I do a .set() on the list, it'll overwrite the data that is already there; it doesn't matter which record has been changed, or whether it has been removed on my server for any reason, .set() keeps the order for me.
I don't have to make a .remove() of all the data in the list when I want to update it, because, as I said before, this method keeps the order in an awesome way.
Example of what I said:
firebase $add() .push() .set()
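For illustration, a minimal sketch of this approach with the Firebase JavaScript SDK, where events stands in for whatever full list the server has built:
// Replace the whole EVENTS list in a single write; the write either
// succeeds completely or fails, leaving the old list untouched
var ref = firebase.database().ref("EVENTS");
ref.set(events)
    .then(function () { console.log("List replaced"); })
    .catch(function (error) { console.error("Write failed, old list intact", error); });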
Bonus 1:
As a note, I want to warn you not to confuse .push().set(value) with .set(value). Why? Because the former, in the Firebase JavaScript SDK, does the same as .push(value). Look here.
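In other words, with an arbitrary ref and value:
// These two calls do the same thing in the Firebase JavaScript SDK:
ref.push(value);
ref.push().set(value);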
Bonus 2:
If you want to empty the list with .set(), you only need to pass null as the new value, because doing that is equivalent to calling .remove(). Awesome!
See more on all of this in the docs, and in firebase.database.Reference for all the methods in detail.
I was wondering how to implement lazy loading / loading more data on scroll with Mongoose. I want to load 10 posts at a time, but I'm not sure how best to load the next 10 elements in a query.
I currently have:
var q = Post.find().sort("rating").limit(10);
To load the 10 posts with the highest "rating". How do I go about doing this for the next 10 posts?
The general concept of "paging" is to use .skip(), which "skips" over the results that have already been retrieved, so you can do this:
var q = Post.find().sort( "rating" ).skip(10).limit(10);
But really, as you can imagine, this is going to slow down considerably once you get a few "pages" in. So you really want something smarter. Essentially this is a "range query" where you want to grab results higher (or lower, if descending) than the last set of results retrieved. So given a last value of 5, for greater-than you would do:
var q = Post.find({ "rating": { "$gt": 5 } }).sort( "rating" ).limit(10);
Looks okay, but really there is still a problem: what if the next "page" also contained results with a rating of 5? This query would skip over those and never display them.
The smart thing to do is to "keep" all of the _id values from the documents, since they are unique keys. Basically you apply the same sort of thing, except this time you make sure you are not including the results from the previous page in your new one. The $nin operator helps here:
var q = Post.find({ "rating": { "$gte": 5 }, "_id": { "$nin": seenIds } })
.sort( "rating" ).limit(10);
Whether seenIds holds just the last page of results or more depends on the "density" of the value you are sorting on, and of course you need to "keep" these in a session variable or something similar.
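As a rough sketch of that bookkeeping (Node.js with Mongoose; req.session is just an assumed per-user store, and you may need to accumulate more than one page of _ids if many documents share the same rating):
async function nextPage(req) {
    const lastRating = req.session.lastRating; // last rating value seen, if any
    const seenIds = req.session.seenIds || []; // _ids already shown to this user

    // First page: no filter. Later pages: range query excluding seen _ids.
    const filter = lastRating === undefined
        ? {}
        : { rating: { $gte: lastRating }, _id: { $nin: seenIds } };

    const posts = await Post.find(filter).sort("rating").limit(10);

    if (posts.length > 0) {
        req.session.lastRating = posts[posts.length - 1].rating;
        req.session.seenIds = posts.map((p) => p._id);
    }
    return posts;
}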
But try to adapt this, as range queries are usually your best-performing option.
I'm building an application with Node.js and MongoDB, and the application has some time-valid data, meaning that a piece of data inserted into the database should be removed (via code) after three days (or any other amount of time).
Currently, my solution is to have a member in my Schema that records when the document was posted, and to remove the document once the current time is more than 3 days past the insertion time, but I'm having trouble figuring out a good way to write this in code.
Are there any standard ways to accomplish something like this?
There are two basic ways to accomplish this with a TTL index. A TTL index will let you define a special type of index on a BSON Date field that will automatically delete documents based on age. First, you will need to have a BSON Date field in your documents. If you don't have one, this won't work. http://docs.mongodb.org/manual/reference/bson-types/#document-bson-type-date
Then you can either delete all documents after they reach a certain age, or set expiration dates for each document as you insert them.
For the first case, assuming you wanted to delete documents after 1 hour you would create this index:
db.mycollection.ensureIndex( { "createdAt": 1 }, { expireAfterSeconds: 3600 } )
assuming you had a createdAt field that was a date type. MongoDB will take care of deleting all documents in the collection once they reach 3600 seconds (or 1 hour) old.
For the second case, you will create an index with expireAfterSeconds set to 0 on a different field:
db.mycollection.ensureIndex( { "expireAt": 1 }, { expireAfterSeconds: 0 } )
If you then insert a document with an expireAt field set to a date, MongoDB will delete that document at that date and time:
db.mycollection.insert( {
"expireAt": new Date('June 6, 2014 13:52:00'),
"mydata": "data"
} )
You can read more detail about how to use TTL indexes here:
http://docs.mongodb.org/manual/tutorial/expire-data/
Does anyone have a good approach for querying a collection for documents that are older than 30 seconds? I'm creating a cleanup worker that marks items as failed after they have been in a specific state for more than 30 seconds.
Not that it matters, but I'm using mongojs for this one.
Every document has a created time associated with it.
If you want to do this using mongo shell:
db.requests.find({created: {$lt: new Date((new Date())-1000*60*60*72)}}).count()
...will find the documents that are older than 72 hours ("now" minus "72*60*60*1000" msecs). 30 seconds would be 1000*30.
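For instance, the 30-second version of the same query (using the answer's created field) would be:
// documents whose "created" date is more than 30 seconds in the past
db.requests.find({ created: { $lt: new Date(Date.now() - 1000 * 30) } }).count()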
We are assuming you have a created_at or similar field in your document that holds the time it was inserted (or otherwise modified, depending on which is important to you).
Rather than iterating over the results, you might want to look at the multi option in update to apply your change to all documents that match your query. Setting the time you want to look past should be fairly straightforward.
In shell syntax, which should be pretty much the same of the driver:
db.collection.update({
created_at: {$lt: time },
state: oldstate
},
{$set: { state: newstate } }, false, true )
The first false is for upsert, which does not make any sense in this usage, and the second true marks it as a multi-document update.
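If the positional booleans are hard to read, the same update can be written with an options document instead:
db.collection.update(
    { created_at: { $lt: time }, state: oldstate },
    { $set: { state: newstate } },
    { upsert: false, multi: true }
)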
If the documents are indeed going to be short-lived and you have no other need for them afterwards, then you might consider capped collections. You can set a maximum total size (and optionally a maximum document count) for these, and the natural insertion order favours processing of queued entries.
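A minimal sketch of that option; the collection name and sizes here are placeholders:
// create a capped collection of about 1 MB, bounded to at most 5000 documents
db.createCollection("workqueue", { capped: true, size: 1048576, max: 5000 })

// natural order follows insertion order, so a worker can process entries
// in the order they were queued
db.workqueue.find().sort({ $natural: 1 })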
You could use something like this:
var d = new Date();
d.setSeconds(d.getSeconds() - 30);
db.mycollection.find({ created_at: { $lt: d } }).forEach(function(err, doc) { /* mongojs-style callback: handle each matching doc */ });
The TTL option is also an elegant solution: it's an index that deletes documents automatically after x seconds. See here: https://docs.mongodb.org/manual/core/index-ttl/
Example code would be:
db.yourCollection.createIndex({ created:1 }, { expireAfterSeconds: 30 } )