Chaining multiple firebase actions and rollbacks - javascript

I'm using Firebase and I want to chain some actions. Here is the scenario:
When I add an item, because I don't want to use push IDs, I update a 'Last_Id' value in Firebase every time an item is added. I also update a 'Counter' value to count the number of records (so I don't end up using numChildren(), which can be slow).
The Counter and Last_Id values are in the same tree, like this:
Count
----------> Last_Id
----------> Counter
I did this so that they can both be updated at the same time in a single transaction.
So when I add an item I want three things to happen, in order:
1. Last_Id is retrieved
2. The item is added
3. Last_Id and Counter are both updated
This is my code which makes use of promises.
add: function(ref, obj) {
  // get last_id
  return baseRef.child('Count').child("Last_Id").once("value")
    .then(function(snapshot) {
      return (snapshot.val() + 1);
    })
    // add new data
    .then(function(key) {
      return baseRef.child(ref).child(key).set(obj, function(error) {
        if (error)
          console.log(error.code);
      });
    })
    // update Count and last key
    .then(this.updateCountAndKey(ref, 1));
},
updateCountAndKey: function(ref, i) {
  return baseRef.child('Count').transaction(function(currentValue) {
    if (currentValue !== null)
      return {
        Counter: (currentValue.Counter || 0) + i,
        Last_Id: (currentValue.Last_Id || 0) + 1
      };
  }, function(err, commited, snap) {
    if (commited)
      console.log("updated counter to " + snap.val());
    else {
      console.log("oh no " + err);
    }
  }, false);
}
Since I'm new to JavaScript, and to promises in particular, I want to know if this is a robust way of doing things. I also want to know how to do roll-backs if something goes wrong, so that if one thing fails then everything else fails (e.g. if the update to Last_Id and Counter fails, then the item is not added).
Any help is much appreciated.

As the Firebase documentation specifies, a transaction can only "atomically modify the data at this location", hence you can't use a transaction to update other nodes in Firebase at the same time.
It is recommended to use push IDs (generated by Firebase in a collision-safe way). This removes the need for a transaction for this part of your process. You will still need a transaction if you want to maintain the count; this should be done on success of #2 (adding an item).
Now your process will look like this (a minimal sketch follows below):
1. push an item (auto-generated ID)
2. on success, use a transaction to increment the count
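Roughly, as a sketch only: it reuses the baseRef and ref variables from the question, uses a standalone function form just for illustration, and assumes an SDK version in which write operations return promises.

function add(ref, obj) {
  // push() generates a collision-free ID on the client; set() writes the item
  var itemRef = baseRef.child(ref).push();
  return itemRef.set(obj).then(function() {
    // on success, the transaction only has to maintain the single counter node
    return baseRef.child('Count/Counter').transaction(function(current) {
      return (current || 0) + 1;
    });
  });
}

Because the ID comes from push(), nothing has to be read before the write, and the only contended value left is the counter.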

Related

trying to fix the problems arising from asynchronous code in javascript

I am new to database systems, and what I am trying to do is check whether the e-mail entered by the user during login exists in the database or not. I use the Firebase Database. So, the code I have written is this:
function login() {
  var e_mail = document.getElementById("e-mail").value;
  rootRef = firebase.database().ref();
  rootRef.orderByChild("E_mail").on("child_added", function(snapshot) {
    lst.push(snapshot.val().E_mail);
    //console.log(lst);
  });
  console.log(lst);
}
let lst = [];
login_btn.onclick = function() { login() };
I want to fetch all e-mails from the database, add them to the list, and then loop through that list. Maybe this is not the best way, but that's what I'm working on. I could also just say if (snapshot.val().E_mail == e_mail) { alert("there is such a user"); }, but the problem I have encountered and want to deal with is not that; it's the "callback" function inside the login function. When I log the list in the outer function it shows an empty list, because the inner function does not run until the outer one is done. I understand this, but how can I avoid or fix it? I want to get the full list of e-mails so that I can loop through it. Also, I don't know how to end the "loop" in Firebase, because it sort of loops while it gets the e-mails, so I would like to stop at the moment it finds a matching e-mail.
You're downloading all users to see if one name exists already. That is a waste of bandwidth.
Instead you should use a query to match the email you're looking for, and only read that node:
rootRef.orderByChild("E_mail").equalTo(e_mail).once("value", function(snapshot){
if (snapshot.exists()) {
// A node with the requested email already exists
}
})
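For instance, wired into the login() handler from the question (just a sketch; the element ID and alert text are taken from the question, the else branch is made up):

function login() {
  var e_mail = document.getElementById("e-mail").value;
  var rootRef = firebase.database().ref();
  rootRef.orderByChild("E_mail").equalTo(e_mail).once("value", function(snapshot) {
    if (snapshot.exists()) {
      alert("there is such a user");     // a matching E_mail node was found
    } else {
      alert("no user with that e-mail"); // hypothetical message for the other case
    }
  });
}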
In general, if you need to process all nodes, you'll want to use a value event, which executes for all matching nodes at once. So to get all users from the database, add them to a list, and then do something with that list:
rootRef.orderByChild("E_mail").once("value", function(snapshot){
var list = [];
snapshot.forEach(function(childSnapshot) {
list.push(childSnapshot.val());
});
console.log(list); // this will display the populated array
})
Note that you won't be able to access the list outside of the callback. Even if you declare the variable outside of the callback, it will only be properly populated inside the callback. See Xufox' comment for a link explaining why that is.
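If other code does need the list, one common pattern is to return a promise and consume the list from it; a sketch, assuming an SDK version where once() returns a promise (the helper name loadEmails is made up):

function loadEmails() {
  return rootRef.orderByChild("E_mail").once("value").then(function(snapshot) {
    var list = [];
    snapshot.forEach(function(childSnapshot) {
      list.push(childSnapshot.val().E_mail);
    });
    return list;
  });
}

loadEmails().then(function(list) {
  console.log(list); // populated here, not right after calling loadEmails()
});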

atomic 'read-modify-write' in javascript

I'm developing an online store app, and using Parse as the back-end. The count of each item in my store is limited. Here is a high-level description of what my processOrder function does:
1. find the items users want to buy from the database
2. check whether the remaining count of each item is enough
3. if step 2 succeeds, update the remaining count
4. check if the remaining count has become negative; if it has, revert the remaining count to the old value
Ideally, the above steps should be executed exclusively. I learned that JavaScript is single-threaded and event-based, so here are my questions:
1. no way in Javascript to put the above steps in a critical section, right?
2. assume only 3 items are left, and two users try to order 2 of them respectively. The remaining count will end up as -1 for one of the users, so the remaining count needs to be reverted to 1 in this case. Imagine another user tries to order 1 item while the remaining count is -1; he will fail, although he should be allowed to order. How do I solve this problem?
Following is my code:
Parse.Cloud.define("processOrder", function(request, response) {
Parse.Cloud.useMasterKey();
var orderDetails = {'apple':2, 'pear':3};
var query = new Parse.Query("Product");
query.containedIn("name", ['apple', 'pear']);
query.find().then(function(results) {
// check if any dish is out of stock or not
_.each(results, function(item) {
var remaining = item.get("remaining");
var required = orderDetails[item.get("name")];
if (remaining < required)
return Parse.Promise.error(name + " is out of stock");
});
return results;
}).then(function(results) {
// make sure the remaining count does not become negative
var promises = [];
_.each(results, function(item) {
item.increment("remaining", -orderDetails[item.get("name")]);
var single_promise = item.save().then(function(savedItem) {
if (savedItem.get("remaining") < 0) {
savedItem.increment("remaining", orderDetails[savedItem.get("name")]);
return savedItem.save().then(function(revertedItem) {
return Parse.Promise.error(savedItem.get("name") + " is out of stock");
}, function(error){
return Parse.Promise.error("Failed to revert order");
});
}
}, function(error) {
return Parse.Promise.error("Failed to update database");
});
promises.push(single_promise);
});
return Parse.Promise.when(promises);
}).then(function() {
// order placed successfully
response.success();
}, function(error) {
response.error(error);
});
});
no way in Javascript to put the above steps in a critical section, right?
See, here is the amazing part: in JavaScript, everything runs in a critical section. There is no preemption and multitasking is cooperative. If your code has started running, there is simply no way any other code can run before yours completes.
That holds, however, only until your code is done executing.
The problem is that you're doing IO, and IO in JavaScript yields back to the event loop before actually happening, unlike in blocking code. So when you create and run a query you don't actually continue running right away (that's what your callback/promise code is about).
Ideally, the above steps should be executed exclusively.
Sadly, that's not a JavaScript problem; it's a host-environment problem, in this case Parse. You have to explicitly yield control to other code when you use their APIs (through callbacks and promises), and it is up to them to solve it.
Lucky for you, Parse has atomic counters. From the API docs:
To help with storing counter-type data, Parse provides methods that atomically increment (or decrement) any number field. So, the same update can be rewritten as:
gameScore.increment("score");
gameScore.save();
There are also atomic array operations which you can use here. Since you can do step 3 atomically, you can guarantee that the counter represents the actual inventory.
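To make that concrete, the decrement-check-revert from the question can be wrapped per item roughly like this; a sketch only, reusing the Product fields from the question and the old Parse promise API (the helper name reserveItem is made up):

function reserveItem(item, required) {
  item.increment("remaining", -required);        // atomic decrement on the server
  return item.save().then(function(saved) {
    if (saved.get("remaining") >= 0) {
      return saved;                              // enough stock, the reservation holds
    }
    // oversold: atomically give the units back, then fail this order
    saved.increment("remaining", required);
    return saved.save().then(function() {
      return Parse.Promise.error(saved.get("name") + " is out of stock");
    });
  });
}

Because both increments are atomic, concurrent orders each revert only their own units, so the counter never drifts away from the real inventory.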

jQuery Deferred returns only last value in loop

So I'm trying to go through one Firebase database to find entries in the database matching a criteria. Therefore I'm using the deferred object of jQuery to handle the database calls.
Once I get a return value from this first database, I want to get the user info from a second database for each of those values from the first db. Then the results are added to a JSON array.
So it's:
<search for value, find one>
<<<search other db for other info>>>
<continue search for outer value>
But this only returns one value, although everything else is running fine (and the console logs all the info correctly).
Here's the code:
function find(searchLocation, profileID) {
  var requestUserData = {
    data: []
  };
  var def = $.Deferred();
  // This will be executed as long as there are elements in the database that match
  // the criteria and that haven't been loaded yet (so it's a simple loop)
  Ref.orderByChild("location").equalTo(searchLocation).on("child_added", function(snapshot) {
    def.resolve(snapshot.val().ID);
  });
  return def.promise();
}
I hope you guys have any ideas on what to do or how I could solve this. Thanks in advance!
Edit: upon further testing I discovered that this problem already exists in the outer loop, so only the first value is being returned. I think this is related to the position of the resolve() method, but I didn't find a way to change this behaviour.
Firebase is a real-time database. The events stream in as changes occur at the server. You're attempting to take this real-time model and force it into a CRUD strategy, doing a GET operation on the data. A better solution would be to simply update the values in real time as they are modified.
See AngularFire, ReactFire, or BackboneFire for an example of how you can do this with your favorite bindings framework.
To directly answer the question: if you want to retrieve a static snapshot of the data, use a once() callback with a value event, not a real-time stream from child_added:
Ref.orderByChild("location").equalTo(searchLocation).once("value", function(snapshot) {
def.resolve(snapshot.val());
});
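If you then need the per-ID lookups from the second database, you can chain them off that single read. A rough sketch, assuming an SDK version where once() returns a promise and a second reference called userRef (a made-up name here) keyed by the stored ID:

function find(searchLocation) {
  return Ref.orderByChild("location").equalTo(searchLocation).once("value")
    .then(function(snapshot) {
      var lookups = [];
      snapshot.forEach(function(child) {
        // one read per matching record, using the ID stored on it
        lookups.push(userRef.child(child.val().ID).once("value"));
      });
      return Promise.all(lookups);
    })
    .then(function(userSnapshots) {
      return userSnapshots.map(function(s) { return s.val(); });
    });
}

With this, the jQuery deferred becomes unnecessary, since the function already returns a promise that resolves with the whole array.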

How to query orchestrate.io

I was searching for an easy and simple database for a little highscore system for some games I'm developing in JavaScript.
I saw Orchestrate.io in GitHub's Student Developer Pack. I found a suitable driver module, orchestrate for Node.js, and have integrated it.
The problem comes with querying Orchestrate for my data. I have managed to save scores and query them with db.list('collection'), but this does not seem to respond with all the data; it appeared to me that some values are not returned.
I read about the db.search('collection','query') function, but I don't really understand how I could return all data, because I don't want to query in a specific way.
My objects are as simple as follows:
{"name":"Jack","score":1337}
As I understand it, one has to send a key when putting such values into an Orchestrate collection. But I'd like to query the whole collection and get the values in descending order of score.
As for now I end up sorting the result on the client-side.
I hope you guys can give me some hints for a query that can sort for specific values!
You have the option to use a SearchBuilder:
db.newSearchBuilder()         // build a search object
  .collection('collection')   // set the collection to be searched
  .sort('score', 'desc')      // set the order of the results
  .query("*")                 // empty search, matches everything
  .then(function (res) {      // callback function for the results
    // Do something with the results
  });
Source
By default, .list uses a pagination limit of 10. You can either increase that, e.g.:
db.list('collection', { limit: 100 })
Or use .links, .links.next (from the docs):
db.list('collection', { limit: 10 })
  .then(function (page1) {
    // Got first page
    if (page1.links && page1.links.next) {
      page1.links.next.get().then(function (page2) {
        // Got second page
      });
    }
  });
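If you really do want everything client-side, you can keep following links.next until it is absent; a sketch only (the helper name listAll is made up, and res.body.results is an assumption about the client's response shape, so double-check it against the driver you're using):

function listAll(acc, page) {
  acc = acc || [];
  // the first call lists the collection; later calls follow the `next` link
  var request = page ? page.links.next.get() : db.list('collection', { limit: 100 });
  return request.then(function (res) {
    acc = acc.concat(res.body.results);   // assumed response shape
    return (res.links && res.links.next) ? listAll(acc, res) : acc;
  });
}

That said, sorting on the server with the SearchBuilder shown above is usually the simpler option.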

Self-triggered perpetually running Firebase process using NodeJS

I have a set of records that I would like to update sequentially in perpetuity. Basically:
1. Get the least recently updated record
2. Update the record
3. Set the date of the record to now (aka. send it to the back of the list)
4. Back to step 1
Here is what I was thinking using Firebase:
// update record function
var updateRecord = function() {
  // get least recently updated record
  firebaseOOO.limit(1).once('value', function(snapshot) {
    key = _.keys(snapshot.val())[0];
    /*
     * do 1-5 seconds of non-Firebase processing here
     */
    snapshot.ref().child(key).transaction(
      // update record
      function(data) {
        return updatedData;
      },
      // update priority after commit (would like to do it in transaction)
      function(error, committed, snap2) {
        snap2.ref().setPriority(snap2.dateUpdated);
      }
    );
  });
};

// listen whenever priority changes (aka. new item needs processing)
firebaseOOO.on('child_moved', function(snapshot) {
  updateRecord();
});

// kick off the whole thing
updateRecord();
Is this a reasonable thing to do?
In general, this type of daemon is precisely what was envisioned for use with the Firebase NodeJS client. So, the approach looks good.
However, in the on() call it looks like you're dropping the snapshot that's being passed in on the floor. This might be application specific to what you're doing, but it would be more efficient to consume that snapshot in relation to the once() that happens in the updateRecord().
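A rough sketch of that suggestion, reusing the 2.x-style API from the question (processRecord is a made-up name, and the actual record update is elided just as in the question):

// let child_moved hand the record straight to the worker instead of re-querying
function processRecord(recordSnapshot) {
  /*
   * do 1-5 seconds of non-Firebase processing here
   */
  recordSnapshot.ref().transaction(function(data) {
    if (data) {
      // ...apply the actual record update to `data` here...
      data.dateUpdated = Date.now();   // send it to the back of the list
    }
    return data;
  }, function(error, committed, snap) {
    if (committed) {
      snap.ref().setPriority(snap.val().dateUpdated);
    }
  });
}

firebaseOOO.on('child_moved', function(snapshot) {
  processRecord(snapshot);             // no extra limit(1).once('value') read needed
});

You would still want one initial limit(1).once('value') read to kick the loop off, as in the original code.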
