I have a list of multiple earthquake registries (around 200; I'll show just 2 for the sake of the example):
{
  "EVENTS" : {
    "-Yn6oKFQdn5s24R" : {
      "event" : {
        "date" : "22/04/18",
        "time" : "10:01:45",
        "place" : "Some place"
      },
      "timestamp" : "Mon Apr 23 2018 12:05:00 GMT-0600"
    },
    "-R96Yn6oKFQdn5s" : {
      "event" : {
        "date" : "23/04/18",
        "time" : "11:02:45",
        "place" : "Some place"
      },
      "timestamp" : "Mon Apr 23 2018 12:05:00 GMT-0600"
    }
  }
}
How does this list work? (on my server)
Every time a new event is detected, it is added to the list.
Sometimes an event may not have been detected correctly and needs an update, or it needs to be deleted for some other reason.
The list has a limit, and when new events are added, older events are deleted from the list.
So, whenever I'm going to push the list to Firebase because the list on the server has changed for any reason, I don't want to work out which registries have changed or been updated; instead I prefer to delete the complete list and populate it again.
The problem I'm thinking of
What if, after I delete the list on Firebase, some request fails while I'm pushing? If that happens, my list on Firebase would be left empty.
What I have
I have written code that first saves the existing keys and then makes the push. If some registry fails, it deletes the newly added keys; if everything succeeds, it deletes the keys saved in the first step and keeps the newly added ones.
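A minimal sketch of that rollback flow, assuming the namespaced JavaScript SDK and the EVENTS path from above (replaceList and serverEvents are placeholder names of mine):

var ref = firebase.database().ref('EVENTS');

function replaceList(serverEvents) {
  // Step 1: remember the keys that are currently on Firebase.
  return ref.once('value').then(function (snapshot) {
    var oldKeys = Object.keys(snapshot.val() || {});

    // Step 2: push every new event, remembering the generated keys.
    var newKeys = [];
    var pushes = serverEvents.map(function (event) {
      var child = ref.push();
      newKeys.push(child.key);
      return child.set(event);
    });

    return Promise.all(pushes)
      .then(function () {
        // Step 3a: every push succeeded, so drop the old entries.
        return Promise.all(oldKeys.map(function (k) { return ref.child(k).remove(); }));
      })
      .catch(function (error) {
        // Step 3b: something failed, so roll back the new entries.
        return Promise.all(newKeys.map(function (k) { return ref.child(k).remove(); }))
          .then(function () { throw error; });
      });
  });
}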
So, my question
Is there a better way in the Firebase Realtime Database to deal with this situation? I have read about transactions, but I don't know if they can help.
Well, based on my specific case, I found that it's more useful to accomplish exactly what I need with .set() instead of .push().
Why?
If I do a .set() on the list, it'll overwrite the data that is already there; it doesn't matter which registry has changed, or whether it was previously removed on my server for any reason, .set() keeps the order for me.
I don't have to call .remove() on all the data in the list when I want to update it, because, as I said before, this method keeps the order in an awesome way.
Example of what I said:
firebase $add() .push() .set()
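A minimal sketch of what I mean, assuming the same EVENTS path (serverEvents stands in for the list kept on my server):

var ref = firebase.database().ref('EVENTS');

// One atomic write: everything under EVENTS is replaced in a single request,
// so a failure leaves the previous list untouched instead of half-deleted.
ref.set(serverEvents)
  .then(function () { console.log('list replaced'); })
  .catch(function (error) { console.error('replace failed, old data intact', error); });

Because .set() is a single write, it either fully replaces the list or fails and leaves the old data in place, which removes the failure window of the remove-then-push approach.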
Bonus 1:
As a side note, I want to warn against confusing .push().set() with .set(). Why? Because the former, in the Firebase JavaScript SDK, does the same as .push(value). Look here
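In code, the equivalence looks like this:

// These two calls do the same thing: push(value) is shorthand for
// generating a new child key with push() and then calling set(value) on it.
ref.push(newEvent);
ref.push().set(newEvent);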
Bonus 2:
If you want to empty the list with .set(), you only need to pass null as the new value, because doing that is equivalent to calling .remove(). Awesome!
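For example:

// Passing null removes everything at this location, same as ref.remove().
ref.set(null);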
See more about all of this in the docs, and firebase.database.Reference for all the detailed methods.
Related
I have a collection whose documents look something like this:
count: number
first: timestamp
last: timestamp
The first value should (almost) never change after the document's creation.
In a batch write operation, I am trying to update documents in this collection, or create those documents that do not yet exist. Something like
batch.setData([
  "count": FieldValue.increment(someInteger),
  "first": someTimestamp,
  "last": someTimestamp
], forDocument: someDocumentRef, mergeFields: ["count", "last"])
My hope was that by excluding first from the mergeFields array, Firestore would set count and last by merging them into an existing document or creating a new one, and would set first only if it had no previous value (i.e., the document didn't exist before this operation). It is clear to me now that this is not the case; instead, first is completely ignored. Now I'm left wondering what the Firestore team intended for this situation.
I know that I could achieve this with a Transaction, but that doesn't tie in very well with my batch write. Are Transactions my only option, or is there a better way to achieve this?
I have created timestamps and other data in my documents, and I handle this using separate create and update functions rather than trying to do it all at once.
The initial creation function includes the created date etc., and subsequent updates use the non-destructive update; just omit any fields in the update payload that you do not want to overwrite.
e.g. to create:
batch.set(docRef, {created: someTimestamp, lastUpdate: someTimestamp})
then to update:
batch.update(docRef, {lastUpdate: someTimestamp, someOtherField: someData})
This will not overwrite the created field or any other existing fields, but it will create someOtherField if it does not exist.
If you need to do an "only update existing fields" update after the document has been created, then currently you have to read the document first to find out whether the fields exist, and then build an update payload that patches only the desired fields. This can be done in a transaction, or you can write the logic yourself, depending on your needs.
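For what it's worth, a hedged sketch of that read-then-patch flow as a transaction with the web SDK (docRef, someInteger, and someTimestamp mirror the question; this is one way to do it, not the only one):

db.runTransaction(function (tx) {
  return tx.get(docRef).then(function (snap) {
    if (!snap.exists) {
      // Document is new: this is the only time "first" is written.
      tx.set(docRef, {
        count: someInteger,
        first: someTimestamp,
        last: someTimestamp
      });
    } else {
      // Document exists: leave "first" alone and patch the rest.
      tx.update(docRef, {
        count: firebase.firestore.FieldValue.increment(someInteger),
        last: someTimestamp
      });
    }
  });
});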
So, I know there are a few similarly named questions, but this is not the same.
I am curious to see if anyone could explain the reasoning for the lack of an increment sentinel, similar to the delete one.
As far as I know, a field deletion is no different from a document update; I could delete a field by simply rewriting the entire document with new data that leaves that field out, hence the question.
If we have a FieldValue.delete(), why the lack of a FieldValue.increment()?
Note: I am aware of the 1QPS limit and I doubt it has anything to do with the above.
Regards!
Version 5.9.0 - Mar 14, 2019
Added FieldValue.increment(), which can be used in update() and
set(..., {merge:true}) to increment or decrement numeric field values
safely without transactions.
https://firebase.google.com/support/release-notes/js
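A quick usage sketch with the web SDK (counterRef is an assumed document reference):

var increment = firebase.firestore.FieldValue.increment(1);

// Atomically adds 1 to "visits" without a read-modify-write transaction.
counterRef.update({ visits: increment });

// Works with a merging set() too; a missing field is treated as 0.
counterRef.set({ visits: increment }, { merge: true });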
So I have an application that involves giving an estimated wait time. I currently have my schema set up so that the waitTime value is estimated from the count of the number of items in the collection. This works fine.
The problem I'm having is that I need to be able to reduce the estimated wait time by 15 for every person in the database whenever someone is deleted from it.
For instance, say there are four people currently in the database, with respective assigned wait times of 15, 30, 45, and 60. Then let's say the second person in the database is removed (i.e., they cancel their appointment). The two people who were after that second person then need their estimated wait times updated to 30 and 45 minutes.
{
  "_id": "qntsyc9RZqkbHnSGM",
  "Name": "John",
  "PhoneNumber": "5555555555",
  "createdAt": "2017-04-05T05:05:46.024Z",
  "currentStatus": "Waiting",
  "waitTime": 30 // this is the value I want to reduce for every object in the database
}
How would I go about doing this?
P.S. This is basically creating an index, but I have had issues trying to create an index for my db. I've tried using createIndex() and ensureIndex(), but have had no success (maybe I'm just doing it wrong). If there is a way to create an index for my db, then I can work with that as well.
If you want to reduce the value by 15 for all of them, you'll want to use the $inc operator. It accepts negative values, so you can use it to either increment or decrement.
The update would look something like this; just change the name of the collection:
db.epace.update({}, { $inc: { waitTime: -15 } }, { multi: true })
If you need to decrease the wait time after the deleted record, you need to update using the $inc operator with a negative value:
db.queue.update({}, { $inc: { waitTime: -15 } }, { multi: true })
But this will update all the documents, even the ones before the deleted record, so you have to update only the documents that come after the deleted one, not the previous ones.
You can get those documents easily using:
db.queue.update(
  { createdAt: { $gt: timestamp } },
  { $inc: { waitTime: -15 } },
  { multi: true })
MongoDB's update operation affects a single document by default, hence you need to pass {multi: true} to update multiple records. Try this:
db.dbname.update({}, { $inc: { waitTime: -15 } }, { multi: true }); // Mongo query
I have seen that to remove all items of a Mongo collection using JavaScript I should use :
DockerStats.remove(); //where DockerStats is my collection
So my goal is to purge the DB every 20 seconds, so I wrote the following code:
setInterval(Meteor.bindEnvironment(function () {
  DockerStats.remove();
  console.log("ok")
}), 20000);
But when I start the app I had around 1000 items; then, although the terminal printed "ok" twice, I still have more than 1000 items, so it doesn't work: even if I check right after the "ok", I have more than 1000 items and the number keeps growing.
So maybe I'm removing the items the wrong way?
According to the docs, you need to pass in an empty object to delete the whole collection. So, the below would remove all students from the Students collection:
Students.remove({})
I think this is because, if you want to remove everything and start over, you would use the drop method and recreate the collection, which the docs say is more performant.
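Applied to the code from the question, that is just the original snippet with the empty selector added:

setInterval(Meteor.bindEnvironment(function () {
  // The empty selector {} matches every document in the collection.
  DockerStats.remove({});
  console.log("ok");
}), 20000);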
I am running a script on a large dataset to expand existing information, e.g.:
...
{
  id : 1234567890
},
{
  id : 1234567891
},
...
becomes
...
{
  id : 1234567890,
  Name : "Joe"
},
{
  id : 1234567891,
  Name : "Bob"
},
...
I am doing this via the following code:
for (var cur in members) {
  var curMember = members[cur];
  // fetch account based on curMember.id into 'curAccount'
  if (curAccount != null) {
    curMember.DisplayName = curAccount.DisplayName;
  }
}
For the most part, this works as expected. However, once in a while (on the order of tens of thousands of entries), the result looks like this:
...
{
  id : 1234567891,
  Name : "Bob",
  Name : "Bob"
},
...
I now have data in an invalid format that cannot be read by the DB, since duplicate property names don't make sense. It occurs for random entries when the script is re-run, not the same ones every time. I need either a way to PREVENT this from happening, or a way to DETECT that it has happened so I can simply reprocess the entry. Does anyone know what's going on here?
EDIT: After further investigation, the problem appears to occur only when the objects being modified come from a MongoDB query. It seems that if code explicitly sets a value for the same element name more than once, the field will be duplicated; all elements with the same name appear to be set to the most recently specified value. If it is only assigned once, as in my original problem, it is duplicated only very rarely. I am using MongoDB 2.4.1.
Got it all figured out. MongoDB has a bug up to shell version 2.4.1 that allows duplicate element names to be set on query result objects. Version 2.4.3, released just this Monday, has a fix. See https://jira.mongodb.org/browse/SERVER-9066.
I don't really get your problem. If you assign to an identical property name on an object in ECMAScript, that property just gets overwritten. The construct in your snippet can never exist in that form on a live object (excluding JSON strings).
If you just want to detect the attempt to create a property which is already there, you either need to have that object reference cached beforehand (so you can loop over its keys), or you need to apply ES5 strict mode:
"use strict";
at the top of your file or function. That will ensure the interpreter throws an exception on an attempt to create two identical property keys. You can, of course, use a try-catch statement to intercept that failure.
It seems, however, that you cannot intercept errors that are thrown because of a strict-mode violation.
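A small sketch of the first suggestion, guarding assignments against keys that already exist (setOnce is a helper name of mine):

function setOnce(obj, key, value) {
  // Refuse the assignment if the property already exists on the object itself.
  if (Object.prototype.hasOwnProperty.call(obj, key)) {
    throw new Error("Property '" + key + "' already exists");
  }
  obj[key] = value;
}

var member = { id: 1234567891 };
setOnce(member, "Name", "Bob");    // fine
// setOnce(member, "Name", "Bob"); // would throw, flagging the entry for reprocessing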