I'm using Firebase for a web app. It's written in plain JavaScript using no external libraries.
I can "push" and retrieve data with '.on("child_added")', but '.remove()' does not work the way the documentation says it should. According to the API:
"Firebase.remove() -
Remove the data at this Firebase location. Any data at child locations will also be deleted.
The effect of the delete will be visible immediately."
However, the remove does not occur immediately, but only when the entire script is done running. I need to remove the data and then use the cleared tree immediately afterwards.
Example code:
ref = new Firebase("myfirebase.com") //works
ref.push({key:val}) //works
ref.on('child_added', function(snapshot){
//do stuff
}); //works
ref.remove()
//does not remove until the entire script/page is done
There is a similar post here, but I am not using Ember libraries, and even so that solution seems like a workaround for something that should be as simple as the API describes.
The problem is that you call remove on the root of your Firebase:
ref = new Firebase("myfirebase.com")
ref.remove();
This will remove the entire Firebase through the API.
You'll typically want to remove specific child nodes under it though, which you do with:
ref.child(key).remove();
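For example, a minimal sketch using the same legacy Firebase JS API as the question (the URL and data here are illustrative):
var ref = new Firebase("https://myfirebase.firebaseio.com");
// push() returns a reference to the newly created child
var newRef = ref.push({key: "val"});
// remove only that child, leaving the rest of the tree intact
newRef.remove();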
I hope this code will help someone - it is from the official Google Firebase documentation:
var adaRef = firebase.database().ref('users/ada');
adaRef.remove()
    .then(function() {
        console.log("Remove succeeded.");
    })
    .catch(function(error) {
        console.log("Remove failed: " + error.message);
    });
To remove a record:
var db = firebase.database();
var ref = db.ref();
var survey = db.ref(path); // e.g. path is 'company/employee'
survey.child(key).remove(); // e.g. key is the employee id
Firebase.remove(), like probably most Firebase methods, is asynchronous, so you have to listen to events to know when something has happened:
var parent = ref.parent();
parent.on('child_removed', function (snapshot) {
    // removed!
});
ref.remove();
According to the Firebase docs, this should work even if you lose the network connection. If you want to know when the change has actually been synchronized with the Firebase servers, you can pass a callback function to the Firebase.remove method:
ref.remove(function (error) {
    if (!error) {
        // removed!
    }
});
As others have noted, the call to .remove() is asynchronous. We should all be aware that nothing happens 'instantly', even at the speed of light.
What you mean by 'instantly' is that the next line of code should be able to execute after the call to .remove(). With asynchronous operations, the next line may run after the data has been removed, or it may not; it comes down to chance and the amount of time that has elapsed.
.remove() takes one parameter, a callback function, to help deal with this situation: it lets us perform operations after we know the operation has completed (with or without an error). .push() takes two parameters, a value and a callback, just like .remove().
Here is your example code with modifications:
ref = new Firebase("myfirebase.com")
ref.push({key:val}, function(error){
//do stuff after push completed
});
// deletes all data pushed so far
ref.remove(function(error){
//do stuff after removal
});
If you are using axios and making the delete via a service call:
URL: https://react-16-demo.firebaseio.com/
Schema Name: todos
Key: -Lhu8a0uoSRixdmECYPE
axios.delete('https://react-16-demo.firebaseio.com/todos/-Lhu8a0uoSRixdmECYPE.json')
    .then(function () { /* deleted */ });
can help.
I have developed some applications using Firebase. Over time, I realized that I was repeating a lot of unnecessary code, so I decided to create a small library to help increase my productivity. Right at the beginning, I tried to create this object in JavaScript:
read.childRoot = function(att) {
    var acessChildRoot = firebase.database().ref("root/");
    acessChildRoot.once('value').then(function(snapshot) {
        alert(snapshot.child(att).val());
    });
}
And I tried to access through this line of code:
alert(read.childRoot("nome"));
So I was able to read the reference I wanted, but the first value returned was undefined. How can I filter this out and display only the value I really want to see?
It seems that you want to wait for the first value to be set on a node.
In that case, I recommend using this snippet (from my gist):
var listener = ref.on('value', function(snapshot) {
if (snapshot.exists()) {
console.log('Value is now '+snapshot.val());
ref.off('value', listener);
}
});
I'm working on a small blog engine where the user can create a blog entry and link tags to it. It is a many-to-many relation, but because Breeze cannot yet manage this relation I have to expose the join table to Breeze so that I can persist the data step by step. And my problem is here.
Tables:
BlogEntry
BlogEntryTag
Tag
Scenario:
user opens the "new blog entry" form or selects an existing one to be edited
enters the text, etc
selects one or more tags
Business logic:
create a new entity with Breeze / query the selected one
save the blog entry (1st server call, which returns the blog_id if the blog entry is new)
check the already existing connections between the tags and the blog entry; if the blog entry is being edited, the existing blogEntry-tag relations might change (2nd server call)
based on the tag name, select the tag_id from the tag table (3rd server call)
create the BlogEntryTag entities with Breeze
persist the BlogEntryTag entities into the database (4th server call)
I think these calls must run consecutively.
I have this code, and as you can see in the attached screenshot, the console logging marked '_blogEntryEnttity' does not wait until the data returns from the server; it executes before the logging marked '_blogEntryEnttity inside'. The code then throws a reference exception when it tries to set the title property a few lines later.
var blogEntryEntityQueryPromise = datacontext.blogentry.getById(_blogsObject.id);
blogEntryEntityQueryPromise.then(function (result)
{
console.log('result', result);
_blogEntryEntity = result[0];
console.log('_blogEntryEnttity inside', _blogEntryEntity);
//if I need synchronous execution then I have to put the code here which must be executed consecutively
});
console.log('_blogEntryEnttity', _blogEntryEntity);
}
//mapping the values we got
_blogEntryEntity.title = _blogsObject.title;
_blogEntryEntity.leadWithMarkup = _blogsObject.leadWithMarkup;
_blogEntryEntity.leadWithoutMarkup = _blogsObject.leadWithoutMarkup;
_blogEntryEntity.bodyWithMarkup = _blogsObject.bodyWithMarkup;
_blogEntryEntity.bodyWithoutMarkup = _blogsObject.bodyWithoutMarkup;
console.log('_blogEntryEnttity', _blogEntryEntity);
The example comes from here.
My question is: why does it not wait until the data comes back? What is the way to handle cases like this?
I did figure out that if I need the execution to be sequential, I should place the dependent code inside the success handler that follows the data retrieval from the promise. However, I really don't like this solution, because my code will get ugly after a while and hard to maintain.
The datacontext.blogentry.getById method looks like the code below, and its implementation is in an abstract class; you can find that code below too. The whole repository pattern comes from John Papa's course on Pluralsight.
Repository class method
function getById(id)
{
return this._getById(this.entityName, id);
}
Abstract repository class method. According to Breeze's documentation page, the EntityQuery class's execute method returns a promise.
function _getById(resource, id) {
var self = this;
var manager = self.newManager;
var Predicate = breeze.Predicate;
var p1 = new Predicate('id', '==', id);
return EntityQuery.from(resource)
.where(p1)
.using(manager).execute()
.then(success).catch(_queryFailed);
function success(data) {
return data.results;
}
}
I appreciate your help in advance!
I don't think you need all these round trips. I'd do this:
Query all available Tag entities, so they'll be in the EntityManager's cache (you need these to populate the UI anyway).
If it's an existing BlogEntry, just query the BlogEntry and all its associated BlogEntryTag entities; Breeze will connect the BlogEntryTags to their associated Tags in the cache. You'll add/delete BlogEntryTags if the user selects/unselects Tags for the BlogEntry.
var query = EntityQuery.from("BlogEntries").where("id", "==", id).expand("BlogEntryTags");
If it's a new BlogEntry, it won't have any BlogEntryTags. You'll create these when you save, after the user selects some tags.
Save the added/updated BlogEntry and any added/deleted BlogEntryTag entities to the database in a single saveChanges call.
See the Presenting Many-to-Many doc and its associated plunker for a deeper dive. The UI is different from what you want, but the underlying concepts are useful.
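A rough sketch of that flow, assuming a Breeze EntityManager named manager and the entity/resource names used above (serviceName and the helper names are illustrative):
// Sketch only: load tags into the cache, load an entry with its join rows,
// then persist all pending changes in a single round trip.
var manager = new breeze.EntityManager(serviceName);

function loadTags() {
    return breeze.EntityQuery.from("Tags").using(manager).execute();
}

function loadEntryWithTags(id) {
    return breeze.EntityQuery.from("BlogEntries")
        .where("id", "==", id)
        .expand("BlogEntryTags")
        .using(manager).execute();
}

function saveAll() {
    // one saveChanges call persists the BlogEntry and any
    // added/deleted BlogEntryTag entities together
    return manager.saveChanges();
}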
"Why does it not wait until the data comes back?"
Because promises don't magically synchronize execution. They're still asynchronous; they still rely on callbacks.
"What is the way to handle cases like this?"
You need to put the code that should wait inside the then callback.
"However, I really don't like this solution, because my code will get ugly after a while and hard to maintain."
Not really: you can write concise and elegant asynchronous code with promises. If your code is becoming too much like spaghetti, abstract parts of it into their own functions. You should be able to get to a clean, flat promise chain.
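For instance, the mapping code from the question can move into a flat chain; getById and the _blogsObject fields come from the question, the rest is a sketch:
datacontext.blogentry.getById(_blogsObject.id)
    .then(function (results) {
        // runs only after the data is back, so the entity exists here
        var entry = results[0];
        entry.title = _blogsObject.title;
        entry.leadWithMarkup = _blogsObject.leadWithMarkup;
        // ...map the remaining fields the same way
        return entry;
    })
    .then(function (entry) {
        console.log('entry ready', entry);
    })
    .catch(function (error) {
        console.error('query failed', error);
    });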
I'm using Parse Cloud Code to update some counters on a user when after_delete is called on certain classes. A user has counters for subscriptions, followers, and following that are incremented in the before_save for subscriptions and follows, and decremented in the before_delete for the same classes.
The issue I'm running into is when a user is deleted. The after_delete function destroys all related subscriptions/follows, but this triggers an update to the (now deleted) user via the before_delete for subscriptions/follows. This always causes the before_delete to error out.
Perhaps I'm conceptually mixed up on the best way to accomplish this, but I can't figure out how to properly set up the following code in the follow class's before_delete:
var fromUserPointer = follow.get("fromUser");
var toUserPointer = follow.get("toUser");
fromUserPointer.fetch().then(function (fromUser) {
    // update following counter
    // if fromUser is already deleted, none of the rest of the promise chain is executed
    return toUserPointer.fetch();
}).then(function (toUser) {
    // update followers count
});
Is there a way to determine if the fromUserPointer and toUserPointer point to a valid object short of actually performing the fetch?
It's not an error to not find the user, but by not handling the missing-object case on the fetch, it is implicitly treated as an error.
So...
fromUserPointer.fetch().then(function (result) {
    // good stuff
}).then(function (result) {
    // good stuff
}).then(function (result) {
    // good stuff
}, function (error) {
    // this is good stuff too; if there's no mode of failure
    // above that would cause you to want NOT to delete, then...
    response.success();
});
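If you want to distinguish a genuinely missing user from other failures, one hedged option is to inspect the error code; Parse.Error.OBJECT_NOT_FOUND is the standard Parse code for a missing object, and the counter logic here is illustrative:
fromUserPointer.fetch().then(function (fromUser) {
    // user still exists: update the following counter here
}, function (error) {
    if (error.code === Parse.Error.OBJECT_NOT_FOUND) {
        // user already deleted; treat it as success and keep going
        return Parse.Promise.as(null);
    }
    // anything else is a real failure
    return Parse.Promise.error(error);
});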
In my server/server.js:
Meteor.methods({
saveOnServer: function() {
var totalCount = Collections.find({
"some": "condition"
}).count();
if (totalCount) {
var customerId = Collections.update('someId', {
"$addToSet": {
objects: object
}
}, function(err) {
if (err) {
throw err;
} else {
return true;
}
});
} else {}
}
});
I'm afraid that when saveOnServer() is called by two clients at the same time, it will return the same totalCount for each client and basically end up inserting the same integer number into the object id. The end goal is to insert a row on the server side with an atomic operation that only completes when totalCount is successfully returned and the document is inserted, ensuring that no duplicate id exists. I'm trying not to use the MongoDB _id, but to have my own incrementing integer id column.
I'm wondering how I can ensure that a field gets auto-incremented for each insert operation, since I currently rely on the total count of documents. Is a race condition possible here? If so, what is the Meteor way of dealing with it?
In Meteor's concurrency model, you can imagine a whole method as an uninterruptible block of stuff that happens. In order for Meteor to switch from running one method midway to, say, starting another method, you need to "yield": the method needs to signal, "I can be interrupted."
Methods yield whenever they do something asynchronous, which in practice means any time you do a database update or call a method with a callback, in Meteor 0.6.5 and later. Since you give your update call a callback, Meteor will always try to do something in between the call to update and the update's callback. However, in Meteor 0.6.4.2 and earlier, database updates were uninterruptible regardless of the use of callbacks.
However, multiple calls to saveOnServer will happen in order and do not cause a race condition. You can call this.unblock() to allow multiple calls to saveOnServer to occur "simultaneously", i.e., not share the same saveOnServer queue of uninterruptible blocks of stuff.
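For illustration, a minimal sketch of this.unblock() in a method (the rest of the method body is elided):
Meteor.methods({
    saveOnServer: function () {
        // let later calls from this client start before this one finishes
        this.unblock();
        // ...
    }
});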
Given the code you have, another method modifying Collections can change the value of count() between the call and the update.
You can prevent one method from invalidating the other midway by modeling the data as follows:
saveOnServer : function () {
// ...
Collections.update({_id:someId, initialized:true, collectionCount: {$gt: 0}},
{$addToSet: {objects: object}});
///...
}
When adding objects to Collections:
insertObject: function() {
//...
var count = Collections.find({some: condition}).count();
Collections.insert({_id:someId, initialized:false, collectionCount: count});
Collections.update({initialized:false},
{$set:{initialized:true}, $inc: {collectionCount: 1}});
}
Note that while this may seem inefficient, it reflects the exact cost of making an update and an insert in different methods behave the way you intend. In saveOnServer you cannot insert.
Conversely, if you remove the callback from Collections.update, it will occur synchronously and there will be no race condition in Meteor 0.6.5 and later.
You can make this collection have a unique key on an index field, and then keep it updated as follows:
1) Whenever you insert into the collection, first do a query to get the maximum index and insert the document with index + 1.
2) To find out the number of documents, just query for the max of the index.
Insertion is now a pair of queries, a read and a write, so it can fail. (DB ops can always fail, though.) However, it can never leave the database in an inconsistent state; the Mongo index will guarantee that.
The syntax for building an index in Meteor is this:
MyCollection._ensureIndex({index: 1}, {unique: true});
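A sketch of the insert side under that scheme; the index field name comes from this answer, everything else is illustrative. If two inserts race, the unique index rejects the duplicate instead of silently creating one:
// read the current maximum index
var last = MyCollection.findOne({}, {sort: {index: -1}});
var next = last ? last.index + 1 : 0;
// with the unique index in place, this insert throws if another
// insert won the race with the same value
MyCollection.insert({index: next /*, other fields */});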
Another way to do this borrows a mechanism Hibernate/JPA follows: set up a collision field. Most of the time this can be an update timestamp that is set on each update. Just prior to doing any update, query the current update timestamp. Then specify the update with a condition that the update timestamp is what you just fetched. If it has changed in the interim, the update won't happen, and you check the return code/count to see whether the row was updated.
JPA does this automatically when you add an annotation for the collision field, but this is essentially what it does behind the scenes.
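A sketch of that collision-field idea in Meteor/Mongo terms (the field and variable names are illustrative):
var doc = MyCollection.findOne(someId);
// the update only succeeds if the timestamp still matches what we read
var updated = MyCollection.update(
    {_id: someId, updatedAt: doc.updatedAt},
    {$set: {value: newValue, updatedAt: new Date()}}
);
if (updated === 0) {
    // someone else changed the doc in the interim; retry or report a conflict
}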
I have a dojo.store.Memory wrapped in a dojo.data.ObjectStore, which I am then plugging into a DataGrid. I want to delete an item from the store and have the grid update. I have tried every combination I can think of with no success. For example:
var combinedStore = new dojo.data.ObjectStore({objectStore: new dojo.store.Memory({data: combinedItems})});
combinedStore.fetch({query:{id: 'itemId'}, onComplete: function (items) {
var item = items[0];
combinedStore.deleteItem(item);
combinedGrid.setStore(combinedStore);
}});
combinedGrid.setStructure(gridLayout);
This throws no errors, but combinedStore.objectStore.data still has the item that was meant to be deleted and the grid still displays it. (There also seems to be a complete mismatch between combinedStore.objectStore.data and combinedStore.objectStore.index.)
There's a simple solution, luckily! The delete is successfully happening; however, you need to save the ObjectStore after the deletion for it to be committed.
Change your code to look like this:
onComplete: function (items) {
var item = items[0];
combinedStore.deleteItem(item);
combinedStore.save();
combinedGrid.setStore(combinedStore);
}
That little save should do the trick. (Please note: the save must occur after the deleteItem. If you put it outside the fetch block, then due to fetch being asynchronous, it will actually run before onComplete!)
Working example: http://pastehtml.com/view/b34z5j2bc.html (Check your console for results.)
This does seem rather poorly documented at present in the new dojo.store documentation.
The old dojo.data.api.Write documentation make it fairly clear. An excerpt from http://dojotoolkit.org/reference-guide/dojo/data/api/Write.html:
Datastores that implement the Write interface act as a two-phase intermediary between the client and the ultimate provider or service that handles the data. This allows for the batching of operations, such as creating a set of new items and then saving them all back to the persistent store with one function call.
The save API is defined as asynchronous. This is because most datastores will be talking to a server and not all I/O methods for server communication can perform synchronous operations.
Datastores track all newItem, deleteItem, and setAttribute calls on items so that the store can both save the items to the persistent store in one chunk and have the ability to revert out all the current changes and return to a pristine (unmodified) data set.
Revert should only revert the store items on the client side back to the point the last save was called.
dojo.store has evolved from dojo.data and seems to follow many of its behavioral aspects.
The new dojo.store documentation at http://www.sitepen.com/blog/2011/02/15/dojo-object-stores/ manages to talk specifically about the delete operation without mentioning having to call save() (in fact, I can't find the word 'save' on that page at all).
I'm staying away from dojo.store as long as possible, hopefully it will be easier to follow in 1.7 or later, whenever I'm forced to use it for real :)