Leaderboard with Firebase Cloud Functions - JavaScript

My current database structure looks like this.
It basically has a mobile_users table and a top100 table, which will be the leaderboard.
I'm trying to figure out how to write a cloud function that executes every minute and updates/populates the top100 table with the userid and earned_points from mobile_users, sorted by earned_points.
Should I add a rank field to this table, or is there a way to order the table in ascending/descending order based on mobile_users?
My current function looks like this
exports.dbRefOnWriteEvent = functions.database.ref('/mobile_user/{userId}/{earned_points}').onWrite(event => {
    var ref = admin.database().ref("/top100");
    ref.orderByChild("earned_points").once("value", function(dataSnapshot) {
        var i = 0;
        dataSnapshot.forEach(function(childSnapshot) {
            var r = (dataSnapshot.numChildren() - i);
            childSnapshot.ref.update({rank: r}, function(error) {
                if (error != null)
                    console.log("update error: " + error);
            });
            i++;
        });
    });
});
I have yet to figure out how to tell the cloud function to execute every minute. I am having trouble structuring these types of queries.
My function is also failing to populate the top100 table with those 3 current users. I would appreciate it if someone could point me in the right direction.

Create an HTTP request function that will do the work.
Then use a cron job to call your HTTP Firebase function every minute: cron-job
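A minimal sketch of such an HTTP function (the exact paths `/mobile_users` and `/top100` follow the question; the `firebase-admin` wiring is assumed, so the database handle is injected here to keep the ranking logic visible and testable — in a real deployment you would pass `admin.database()` and export the handler via `functions.https.onRequest`):

```javascript
// Pure helper: given { uid: { earned_points: n, ... } }, return the 100
// highest scorers ordered by earned_points, each with a 1-based rank field.
function buildTop100(users) {
  return Object.keys(users || {})
    .map(uid => ({ uid, earned_points: users[uid].earned_points || 0 }))
    .sort((a, b) => b.earned_points - a.earned_points)
    .slice(0, 100)
    .map((entry, i) => Object.assign({ rank: i + 1 }, entry));
}

// The handler the cron service would hit every minute. `db` is the Realtime
// Database handle (e.g. admin.database()); req is unused in this sketch.
function rebuildTop100(db, req, res) {
  return db.ref('/mobile_users').once('value')
    .then(snap => db.ref('/top100').set(buildTop100(snap.val())))
    .then(() => res.status(200).send('ok'))
    .catch(err => res.status(500).send(String(err)));
}
```

Writing the whole ranked array with one set() also sidesteps the per-child rank updates that the question's onWrite function was attempting.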

Maybe you can have two root nodes in your database: one like the above, and a second node called Leaderboard.
That second node can be an array where the index reflects the rank and each entry holds the score and uid.
Leaderboard
|-- [0]
|   |-- score: 5000
|   |-- uid: 4zzdawqeasdasq2w1
|-- [1]
|   |-- score: 4990
|   |-- uid: 889asdas1891sadaw
Then when you get a new score, you update the user's node and then also update the leaderboard. Then you just grab the uid and look up the name from the user's node.
Like the other posters said, use an HTTP Firebase Cloud Function and a cron job.
However, I would recommend you use the cron job just to keep the cloud function alive (look up cold starts for Firebase functions), but also make a fetch request to trigger the cloud function from the front-end every time a user plays the game and generates a score. Otherwise, if you get 10 plays per minute and the leaderboard only updates once a minute, that will not be a great user experience for players who expect a real-time leaderboard.
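The write-on-score idea above can be sketched with a pure helper that merges a new score into the leaderboard array (index = rank, as described). The helper name and `maxSize` parameter are illustrative, not from the question:

```javascript
// Insert a new { uid, score } into the leaderboard array described above,
// keeping it sorted by score descending so the array index is the rank.
// Any previous entry for the same uid is dropped first, so a player appears
// at most once; maxSize caps the board (e.g. 100).
function insertScore(board, uid, score, maxSize) {
  const next = board.filter(entry => entry.uid !== uid);
  next.push({ uid, score });
  next.sort((a, b) => b.score - a.score);
  return next.slice(0, maxSize);
}
```

After computing the new array you would write it back in one go (e.g. a set() on the Leaderboard node), and look up display names from the users node via the stored uid, as described.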

Related

Stripe / node.js: how to retrieve a Stripe subscription safely + increment by 1

I just started exploring Stripe for handling payments in Node.js. I want to create a payment system that works as follows:
I create a workspace and start a Stripe subscription of 10 dollars/month.
When someone joins my workspace, it costs me 10 dollars/month extra.
So, when I want to add a person to my subscription, how would I handle this? I found the function below, but I was wondering two things:
How do I add one person to this subscription? It now says quantity: 2, but how do I simply increment it by 1 with every user?
In the example below I use the ID 'sub_6OZnwv0DZBryPt' to retrieve this Stripe subscription. I was wondering where I can get this ID from? I could save this subscription ID in the workspace's MongoDB document after creating the subscription, but I'm not sure if it is safe to keep it like that in my database. Let me know if you have any suggestions.
This is the function to update a subscription:
stripe.subscriptions.update(
    'sub_6OZnwv0DZBryPt',
    { quantity: 2 },
    (err, subscription) => {
        // asynchronously called
    }
);
On 1), you need to retrieve the current subscription based on the stored ID. This can be done as:
stripe.subscriptions.retrieve(
    "sub_6OZnwv0DZBryPt",
    function(err, subscription) {
        // asynchronously called
    }
);
The subscription object will have information about the current quantity (see the example response and doc on the subscription object). Which leads to your second question.
To retrieve the subscription you only need to store the ID. It's safe to do so; the ID is meaningless to others unless they have your test/live keys as well. Be sure you secure those keys, and feel free to store IDs like subscription_id, customer_id, etc.
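Putting both parts together, the increment could be sketched like this (the Stripe client is injected as a parameter so the read-then-write flow is visible; `addSeat` and the idea of passing the stored subscription ID in are illustrative, not Stripe's API):

```javascript
// Add one seat to a subscription: read the current quantity, then write
// quantity + 1 back. `stripe` is the configured Stripe client and
// `subscriptionId` is the ID you stored on the workspace document.
async function addSeat(stripe, subscriptionId) {
  const subscription = await stripe.subscriptions.retrieve(subscriptionId);
  return stripe.subscriptions.update(subscriptionId, {
    quantity: subscription.quantity + 1,
  });
}
```

Note that read-then-write can race if two users join at the same moment; serializing these updates on your server (e.g. through a queue) avoids losing an increment.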

How to run multiple bots on one firebase database?

var database = firebase.database();
var ref = database.ref('Users');
ref.orderByChild('state').equalTo("0").once('value', function(snapshot) {
    var Users = snapshot.val();
    var i = 0;
    if (Object.keys(Users).length > 0) {
        getUser(Users);
    } else {
        console.log("No Users");
    }
});
What I am doing is having a Node.js bot run through my database and search for users with state = 0. If the state equals zero, I run another script that goes and gets some information about them, updates their entry, and then changes the state to 1.
I have quite a large database, so it would be great if I could run a few instances of my bot. It won't work, though, because when the bots initially run, they all look at the same entries, remember which ones have state = 0, and then all repeat each other's work.
I tried changing the ref.orderByChild from using "once" to "on" with child_changed. That didn't seem to work, though, because the script was always waiting/listening for changes and never actually finished one loop; it does not move on to the next entry.
What's the best way to tackle something like this: having multiple bots being able to edit a firebase database without repeating each other's work?
Query and save all the data with a "master" script, then have it divvy up the entire thing and offload the split data to other scripts that receive their portion of data as input.
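The divvying step described above could be sketched like this (the `partitionUsers` name and round-robin split are illustrative; how the master hands each slice to a worker, e.g. via argv or a message queue, is up to you):

```javascript
// Split the uids of all state === "0" users into `workers` roughly equal
// slices; each bot instance then processes only its own slice, so no two
// bots touch the same entry. The state is compared as a string to match
// the equalTo("0") query in the question.
function partitionUsers(users, workers) {
  const uids = Object.keys(users).filter(uid => users[uid].state === '0');
  const slices = Array.from({ length: workers }, () => []);
  uids.forEach((uid, i) => slices[i % workers].push(uid));
  return slices;
}
```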

Nodejs Mongoose - Serve clients a single query result

I'm looking to implement a solution where I can query the Mongoose Database on a regular interval and then store the results to serve to my clients.
I'm assuming this will reduce my response time when my users pull the collection.
I attempted to implement this plan by creating an empty global object and then writing a function that queries the db and then stores the results as the global object mentioned previously. At the end of the function I setTimeout for 60 seconds and then ran the function again. I call this function the first time the server controller gets called when the app is first run.
I then set my clients up so that when they requested the collection, it would first look to see if the global object exists, and if so return that as the response. I figured this would cut my 7-10 second queries down to < 1 sec.
In my novice thinking I assumed that, with Node.js being 'single-threaded', something like this could work quite well - but it just seemed to eat up all my RAM and cause fatal errors.
Am I on the right track with my thinking or is it better to query the db every time people pull the collection?
Here is the code in question:
var allLeads = {};
var getAllLeads = function() {
    allLeads = {};
    console.log('Getting All Leads...');
    Lead.find().sort('-lastCalled').exec(function(err, leads) {
        if (err) {
            console.log('Error getting leads');
        } else {
            allLeads = leads;
        }
    });
    setTimeout(function() {
        getAllLeads();
    }, 60000);
};
getAllLeads();
Thanks in advance for your assistance.

Speeding up an app that makes many Facebook API calls

I've got a simple app that fetches a user's complete feed from the Facebook API in order to tally the number of words he or she has written total on the site.
After he or she authenticates, the page makes a Graph call to /me/feed?limit=100 and counts the number of responses and their dates. If there is a "next" cursor in the response, it then pings that next URL, which looks something like this:
https://graph.facebook.com/[UID]/feed?limit=100&until=1386553333
And so on, recursively, until we reach the time that the user joined Facebook. The function looks like this:
var words = 0;
var posts = function(callback, url) {
    url = url || '/me/posts?limit=100';
    FB.api(url, function(response) {
        if (response.data) {
            response.data.forEach(function(status) {
                if (status.message) {
                    words += status.message.split(/ /g).length;
                }
            });
        }
        if (response.paging && response.paging.next) {
            posts(callback, response.paging.next);
        } else {
            alert("You wrote " + words + " on Facebook!");
        }
    });
};
This works just fine for people who have posted up to 4,000 statuses in total, but it really starts to crawl for power users with 10,000 lifetime updates or more. Each response from the API is only about 25 KB, but I cannot figure out what's straining the most.
After I've added the number of words in each status to my total word count, do I need to specifically destroy the response object so as not to overload memory?
Alternatively, is the recursion depth a problem? We're realistically talking about a total of 100 calls to the API for power users. I've experimented with upping the limit on each call to fetch larger chunks, but it doesn't seem to make a huge difference.
Thanks.
So, you're doing this with the JS SDK, I guess, which means this runs in the browser... Did you try running it in Chrome and watching the network monitor to see the response times, etc.?
With 100 requests, this also means that the data object/JSON must be about 2.5 MB in size, which for some browsers/machines could be quite challenging, I guess. Also, it must take quite a while to fetch the data from FB. What does the user see in the meantime?
Did you think of implementing this on the server side, and then just passing the results to the frontend?
For example, use Node.js together with Socket.IO to do it on the server side and dynamically update the word count.
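The counting itself is cheap either way; moved server-side, it would just fold over the pages. A sketch of the per-page tally (the Graph paging loop and the `countWordsInPage` name are assumptions; the whitespace split here also handles runs of spaces, unlike the single-space split in the question):

```javascript
// Tally words across one page of Graph API /feed results. Call this for each
// page as you follow response.paging.next on the server, accumulating the
// returned count into a running total.
function countWordsInPage(page) {
  return (page.data || []).reduce((total, status) => {
    if (!status.message) return total;
    return total + status.message.split(/\s+/).filter(Boolean).length;
  }, 0);
}
```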

Self-triggered perpetually running Firebase process using NodeJS

I have a set of records that I would like to update sequentially in perpetuity. Basically:
Get least recently updated record
Update record
Set date of record to now (aka. send it to the back of the list)
Back to step 1
Here is what I was thinking using Firebase:
// update record function
var updateRecord = function() {
    // get least recently updated record
    firebaseOOO.limit(1).once('value', function(snapshot) {
        var key = _.keys(snapshot.val())[0];
        /*
         * do 1-5 seconds of non-Firebase processing here
         */
        snapshot.ref().child(key).transaction(
            // update record
            function(data) {
                return updatedData;
            },
            // update priority after commit (would like to do it in transaction)
            function(error, committed, snap2) {
                snap2.ref().setPriority(snap2.dateUpdated);
            }
        );
    });
};

// listen whenever priority changes (aka. new item needs processing)
firebaseOOO.on('child_moved', function(snapshot) {
    updateRecord();
});

// kick off the whole thing
updateRecord();
Is this a reasonable thing to do?
In general, this type of daemon is precisely what was envisioned for use with the Firebase NodeJS client. So, the approach looks good.
However, in the on() call it looks like you're dropping the snapshot that's being passed in on the floor. This might be application-specific, but it would be more efficient to consume that snapshot instead of repeating the once() query inside updateRecord().
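A minimal restructuring along those lines (a sketch keeping the question's `firebaseOOO` name; `makeUpdateRecord` and `processRecord` are hypothetical, with processRecord standing in for the question's transaction logic): let updateRecord accept an optional snapshot, so the child_moved handler reuses what it was given and only the initial kick-off still queries.

```javascript
// Wire up the perpetual loop so the child_moved snapshot is consumed
// directly, rather than re-querying with once() on every trigger.
function makeUpdateRecord(firebaseOOO, processRecord) {
  var updateRecord = function(snapshot) {
    if (snapshot) {
      return processRecord(snapshot);                 // reuse the handler's snapshot
    }
    return firebaseOOO.limit(1).once('value', processRecord); // initial kick-off only
  };
  firebaseOOO.on('child_moved', updateRecord);        // snapshot flows straight through
  return updateRecord;
}
```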
