Firebase cloud function onUpdate is triggered but doesn't execute as wanted - javascript

I'm currently making a web application with a React front-end and a Firebase back-end. It is an application for a local gym and consists of two parts:
A client application for people who train at the local gym
A trainer application for trainers of the local gym
The local gym offers programs for companies. So a company takes out a subscription, and employees of the company can train at the local gym and use the client application. It is important that the individual progress of the company employees is tracked, as well as the collective progress (the total number of kilograms lost by all the employees of company X together).
In the Firestore collection 'users' every user document has the field bodyweight. Whenever a trainer fills in a progress form after a physical assessment for a specific client, the bodyweight field in the user document of the client gets updated to the new bodyweight.
In Firestore there is another collection 'companies' where every company has a document. My goal is to put the total number of kilograms lost by the employees of the company in that specific document. So every time a trainer updates the weight of an employee, the company document needs to be updated. I've made a cloud function that listens to updates of a user's document. The function is listed below:
exports.updateCompanyProgress = functions.firestore
.document("users/{userID}")
.onUpdate((change, context) => {
const previousData = change.before.data();
const data = change.after.data();
if (previousData === data) {
return null;
}
const companyRef = admin.firestore.doc(`/companies/${data.company}`);
const newWeight = data.bodyweight;
const oldWeight = previousData.bodyweight;
const lostWeight = oldWeight > newWeight;
const difference = diff(newWeight, oldWeight);
const currentWeightLost = companyRef.data().weightLostByAllEmployees;
if (!newWeight || difference === 0 || !oldWeight) {
return null;
} else {
const newCompanyWeightLoss = calcNewCWL(
currentWeightLost,
difference,
lostWeight
);
companyRef.update({ weightLostByAllEmployees: newCompanyWeightLoss });
}
});
There are two simple functions in the cloud function above:
const diff = (a, b) => (a > b ? a - b : b - a);
const calcNewCWL = (currentWeightLost, difference, lostWeight) => {
if (!lostWeight) {
return currentWeightLost - difference;
}
return currentWeightLost + difference;
};
I've deployed the cloud function to Firebase to test it, but I can't get it to work. The function triggers whenever the user document is updated, but it doesn't update the company document with the new weightLostByAllEmployees value. It is the first time I'm using Firebase cloud functions, so there's a big chance it's some sort of rookie mistake.

Your current solution has some bugs in it that we can squash.
Always false equality check
You use the following equality check to determine if the data has not changed:
if (previousData === data) {
return null;
}
This will always be false as the objects returned by change.before.data() and change.after.data() will always be different instances, even if they contain the same data.
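To see why, here's a minimal standalone sketch (the field names mirror the ones in your user documents): two distinct object instances are never strictly equal, so compare the individual fields instead.

```javascript
const before = { bodyweight: 80, company: "acme" };
const after = { bodyweight: 80, company: "acme" };

// Distinct object instances are never strictly equal,
// even when their contents are identical.
console.log(before === after); // -> false

// Compare the specific fields you care about instead.
const unchanged =
  before.bodyweight === after.bodyweight &&
  before.company === after.company;
console.log(unchanged); // -> true
```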
Company changes are never handled
While this could be a rare, maybe impossible event, if a user's company was changed, you should remove their weight from the total of the original company and add it to the new company.
In a similar vein, when an employee leaves a company or deletes their account, you should remove their weight from the total in an onDelete handler.
Handling floating-point sums
In case you didn't know, floating-point arithmetic has some minor quirks. Take for example the sum 0.1 + 0.2: to a human, the answer is 0.3, but to JavaScript and many other languages, the answer is 0.30000000000000004. See this question & thread for more information.
Rather than store your weight in the database as a floating-point number, consider storing it as an integer. As weight is often not a whole number (e.g. 9.81kg), you should store this value multiplied by 100 (for 2 decimal places) and then round it to the nearest integer. Then when you display it, you either divide it by 100 or splice in the appropriate decimal symbol.
const v = 1201;
console.log(v/100); // -> 12.01
const vString = String(v);
console.log(vString.slice(0,-2) + "." + vString.slice(-2) + "kg"); // -> "12.01kg"
So for the sum, 0.1 + 0.2, you would scale it up to 10 + 20, with a result of 30.
console.log(0.1 + 0.2); // -> 0.30000000000000004
console.log((0.1*100 + 0.2*100)/100); // -> 0.3
But this strategy on its own isn't bulletproof, because some multiplications still end up with these errors, like 0.14*100 = 14.000000000000002 and 0.29*100 = 28.999999999999996. To weed these out, we round the multiplied value.
console.log(0.01 + 0.14); // -> 0.15000000000000002
console.log((0.01*100 + 0.14*100)/100); // -> 0.15000000000000002
console.log((Math.round(0.01*100) + Math.round(0.14*100))/100) // -> 0.15
You can compare these using:
const arr = Array.from({length: 100}).map((_,i)=>i/100);
console.table(arr.map((a) => arr.map((b) => a + b)));
console.table(arr.map((a) => arr.map((b) => (a*100 + b*100)/100)));
console.table(arr.map((a) => arr.map((b) => (Math.round(a*100) + Math.round(b*100))/100)));
Therefore we can end up with these helper functions:
function sumFloats(a,b) {
return (Math.round(a * 100) + Math.round(b * 100)) / 100;
}
function sumFloatsForStorage(a,b) {
return (Math.round(a * 100) + Math.round(b * 100));
}
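A quick sanity check of these helpers (redefined here so the snippet runs on its own):

```javascript
// Sum two floats, keeping 2 decimal places of precision.
function sumFloats(a, b) {
  return (Math.round(a * 100) + Math.round(b * 100)) / 100;
}
// Same sum, but scaled up by 100 for integer storage.
function sumFloatsForStorage(a, b) {
  return Math.round(a * 100) + Math.round(b * 100);
}

console.log(0.1 + 0.2);                     // -> 0.30000000000000004
console.log(sumFloats(0.1, 0.2));           // -> 0.3
console.log(sumFloatsForStorage(0.1, 0.2)); // -> 30
console.log(sumFloats(0.01, 0.14));         // -> 0.15
```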
The main benefit of handling the weights this way is that you can now use FieldValue#increment() instead of a full-blown transaction to shortcut updating the value. In the rare case that two users from the same company have an update collision, you can either retry the increment or fall back to the full transaction.
Inefficient data parsing
In your current code, you make use of .data() on the before and after states to get the data you need for your function. However, because you are pulling the user's entire document, you end up parsing all the fields in the document instead of just what you need - the bodyweight and company fields. You can parse only the fields you need using DocumentSnapshot#get(fieldName).
const afterData = change.after.data(); // parses everything - username, email, etc.
const { bodyweight, company } = afterData;
in comparison to:
const bodyweight = change.after.get("bodyweight"); // parses only "bodyweight"
const company = change.after.get("company"); // parses only "company"
Redundant math
For some reason you are calculating the absolute value of the difference between the weights, storing the sign of the difference as a boolean, and then using them together to apply the change back to the total weight lost.
The following lines:
const previousData = change.before.data();
const data = change.after.data();
const newWeight = data.bodyweight;
const oldWeight = previousData.bodyweight;
const lostWeight = oldWeight > newWeight;
const difference = diff(newWeight, oldWeight);
const currentWeightLost = companyRef.data().weightLostByAllEmployees;
const calcNewCWL = (currentWeightLost, difference, lostWeight) => {
if (!lostWeight) {
return currentWeightLost - difference;
}
return currentWeightLost + difference;
};
const newWeightLost = calcNewCWL(currentWeightLost, difference, lostWeight);
could be replaced with just:
const newWeight = change.after.get("bodyweight");
const oldWeight = change.before.get("bodyweight");
const deltaWeight = newWeight - oldWeight;
const companySnapshot = await companyRef.get(); // DocumentReference#get() is asynchronous
const currentWeightLost = companySnapshot.get("weightLostByAllEmployees") || 0;
const newWeightLost = currentWeightLost + deltaWeight;
Rolling it all together
exports.updateCompanyProgress = functions.firestore
.document("users/{userID}")
.onUpdate(async (change, context) => {
// "bodyweight" is the weight scaled up by 100
// i.e. "9.81kg" is stored as 981
const oldHundWeight = change.before.get("bodyweight") || 0;
const newHundWeight = change.after.get("bodyweight") || 0;
const oldCompany = change.before.get("company");
const newCompany = change.after.get("company");
const db = admin.firestore();
if (oldCompany === newCompany) {
// company unchanged
const deltaHundWeight = newHundWeight - oldHundWeight;
if (deltaHundWeight === 0) {
return null; // no action needed
}
const companyRef = db.doc(`/companies/${newCompany}`);
await companyRef.update({
weightLostByAllEmployees: admin.firestore.FieldValue.increment(deltaHundWeight)
});
} else {
// company was changed
const batch = db.batch();
const oldCompanyRef = db.doc(`/companies/${oldCompany}`);
const newCompanyRef = db.doc(`/companies/${newCompany}`);
// remove weight from old company
batch.update(oldCompanyRef, {
weightLostByAllEmployees: admin.firestore.FieldValue.increment(-oldHundWeight)
});
// add weight to new company
batch.update(newCompanyRef, {
weightLostByAllEmployees: admin.firestore.FieldValue.increment(newHundWeight)
});
// apply changes
await batch.commit();
}
});
With transaction fallbacks
In the rare case where you get a write collision, this variant falls back to a traditional transaction to reattempt the change.
/**
* Increments weightLostByAllEmployees in all documents atomically
* using a transaction.
*
* `arrayOfCompanyRefToDeltaWeightPairs` is an array of company-increment pairs.
*/
function transactionIncrementWeightLostByAllEmployees(db, arrayOfCompanyRefToDeltaWeightPairs) {
return db.runTransaction((transaction) => {
// get all needed documents, then add the update for each to the transaction
return Promise
.all(
arrayOfCompanyRefToDeltaWeightPairs
.map(([companyRef, deltaWeight]) => {
return transaction.get(companyRef)
.then((companyDocSnapshot) => [companyRef, deltaWeight, companyDocSnapshot])
})
)
.then((arrayOfRefWeightSnapshotGroups) => {
arrayOfRefWeightSnapshotGroups.forEach(([companyRef, deltaWeight, companyDocSnapshot]) => {
const currentValue = companyDocSnapshot.get("weightLostByAllEmployees") || 0;
transaction.update(companyRef, {
weightLostByAllEmployees: currentValue + deltaWeight
})
});
});
});
}
exports.updateCompanyProgress = functions.firestore
.document("users/{userID}")
.onUpdate(async (change, context) => {
// "bodyweight" is the weight scaled up by 100
// i.e. "9.81kg" is stored as 981
const oldHundWeight = change.before.get("bodyweight") || 0;
const newHundWeight = change.after.get("bodyweight") || 0;
const oldCompany = change.before.get("company");
const newCompany = change.after.get("company");
const db = admin.firestore();
if (oldCompany === newCompany) {
// company unchanged
const deltaHundWeight = newHundWeight - oldHundWeight;
if (deltaHundWeight === 0) {
return null; // no action needed
}
const companyRef = db.doc(`/companies/${newCompany}`);
await companyRef
.update({
weightLostByAllEmployees: admin.firestore.FieldValue.increment(deltaHundWeight)
})
.catch((error) => {
// if an unexpected error, just rethrow it
if (error.code !== "resource-exhausted")
throw error;
// encountered write conflict, fall back to transaction
return transactionIncrementWeightLostByAllEmployees(db, [
[companyRef, deltaHundWeight]
]);
});
} else {
// company was changed
const batch = db.batch();
const oldCompanyRef = db.doc(`/companies/${oldCompany}`);
const newCompanyRef = db.doc(`/companies/${newCompany}`);
// remove weight from old company
batch.update(oldCompanyRef, {
weightLostByAllEmployees: admin.firestore.FieldValue.increment(-oldHundWeight)
});
// add weight to new company
batch.update(newCompanyRef, {
weightLostByAllEmployees: admin.firestore.FieldValue.increment(newHundWeight)
});
// apply changes
await batch.commit()
.catch((error) => {
// if an unexpected error, just rethrow it
if (error.code !== "resource-exhausted")
throw error;
// encountered write conflict, fall back to transaction
return transactionIncrementWeightLostByAllEmployees(db, [
[oldCompanyRef, -oldHundWeight],
[newCompanyRef, newHundWeight]
]);
});
}
});

There are several points to adapt in your Cloud Function:
Do admin.firestore() instead of admin.firestore
You cannot get the data of the Company document by doing companyRef.data(). You must call the asynchronous get() method.
Use a Transaction when updating the Company document and return the promise returned by this transaction (see here for more details on this key aspect).
So the following code should do the trick.
Note that since we use a Transaction, we actually don't implement the recommendation of the second bullet point above. We use transaction.get(companyRef) instead.
exports.updateCompanyProgress = functions.firestore
.document("users/{userID}")
.onUpdate((change, context) => {
const previousData = change.before.data();
const data = change.after.data();
if (previousData === data) {
return null;
}
// You should do admin.firestore() instead of admin.firestore
const companyRef = admin.firestore().doc(`/companies/${data.company}`);
const newWeight = data.bodyweight;
const oldWeight = previousData.bodyweight;
const lostWeight = oldWeight > newWeight;
const difference = diff(newWeight, oldWeight);
if (!newWeight || difference === 0 || !oldWeight) {
return null;
} else {
return admin.firestore().runTransaction((transaction) => {
return transaction.get(companyRef).then((compDoc) => {
if (!compDoc.exists) {
throw new Error("Document does not exist!");
}
const currentWeightLost = compDoc.data().weightLostByAllEmployees;
const newCompanyWeightLoss = calcNewCWL(
currentWeightLost,
difference,
lostWeight
);
transaction.update(companyRef, { weightLostByAllEmployees: newCompanyWeightLoss });
});
})
}
});

Related

How do I query a firebase collection by the 'in' paramater with more than 10 values?

I have this function, getFollowingPieces() , where I get posts with userIds that the logged in user follows. Although now I've found it won't work if they follow more than 10 people. The only solution I've seen tells people to use a get request for each userId, but that doesn't allow for pagination or proper order of data. (Notice I'm using an after value and an orderBy) I'd hate to have to move off firebase for this limitation but the following users is a big aspect of this app. Here's my function below:
export async function getFollowingPieces(userId, following, filter, after) {
const order = filter === "Popular" ? "likeCount" : "dateCreated";
const result = await firebase
.firestore()
.collection("pieces")
.where("userId", "in", following.slice(0, 10))
.where("published", "==", true)
.orderBy(order, "desc")
.limit(10)
.get();
const last = result.docs[result.docs.length - 1];
const userFollowedPieces = result.docs.map((piece) => ({
...piece.data(),
docId: piece.id,
}));
const piecesWithUserDetails = await Promise.all(
userFollowedPieces.map(async (piece) => {
let userLikedPiece = false;
let userBookmarkedPiece = false;
if (piece.likes.includes(userId)) {
userLikedPiece = true;
}
if (piece.bookmarks.includes(userId)) {
userBookmarkedPiece = true;
}
const user = await getUserByUserId(piece.userId);
const { username, picture, fullName } = user[0];
return {
username,
picture,
fullName,
...piece,
userLikedPiece,
userBookmarkedPiece,
};
})
);
return { piecesWithUserDetails, last };
}
With the way Cloud Firestore indexes work, these limits are in place to discourage inefficient querying. In your use case, there are two ways that come to mind that could perform the desired query.
In either case, you will see performance benefits in bandwidth and latency by executing them using a Callable Cloud Function.
Of the two approaches, I recommend the first, as it is guaranteed to return relevant results if they exist.
Approach 1: Chunked requests
In this approach, you split the following array into blocks of 10 authors, make requests for each block and zip the sorted results together.
Max. documents retrieved:
(Math.ceil(AUTHOR_COUNT / 10) * 10) documents
Pros:
Works well for a small number of followed authors
Uses index-based querying
Will return related documents regardless of age/popularity
Cons:
Wasteful document requests with a large number of followed authors
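The retrieval bound above is straightforward to compute (a standalone sketch with a hypothetical maxDocs helper, not part of the code below):

```javascript
// Hypothetical helper: upper bound on documents fetched per page
// when querying in chunks of 10 authors.
const maxDocs = (authorCount, pageSize = 10) =>
  Math.ceil(authorCount / 10) * pageSize;

console.log(maxDocs(25)); // -> 30 (three chunks of up to 10 documents each)
console.log(maxDocs(9));  // -> 10 (a single chunk)
```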
/**
 * An intermediate state of processing a `PieceData` object used for sorting.
 * @typedef PieceMetadata
 *
 * @property {String} docId - the piece's document ID
 * @property {QueryDataSnapshot} snapshot - the piece's document snapshot
 * @property {Number | undefined} likeCount - the piece's value for `likeCount`
 * @property {Number | undefined} dateCreated - the piece's value for `dateCreated`
 */
/**
 * Returns the `pageSize` most recent pieces (as `PieceMetadata`
 * objects) sorted according to the given `order` field for the
 * given array of `authors`.
 *
 * `authors` must have at most 10 entries; otherwise use
 * `_getPublishedPiecesByBatchOfAuthors()` instead.
 *
 * Optionally, `after` can be provided as a `DataSnapshot` or
 * appropriate value for `order` to support paginated results.
 */
async function _getPublishedPiecesByAuthors(authors, order, pageSize, after = undefined) {
const query = firebase
.firestore()
.collection("pieces")
.where("userId", "in", authors)
.where("published", "==", true)
.orderBy(order, "desc")
.limit(pageSize);
const result = await (typeof after !== "undefined" ? query.startAfter(after) : query)
.get();
const pieceMetadataArr = [];
result.forEach((piece) => {
pieceMetadataArr.push({
docId: piece.id, // document ID
[order]: piece.get(order), // likeCount/dateCreated as appropriate
snapshot: piece // unprocessed snapshot
});
});
return pieceMetadataArr;
}
/** splits array `arr` into chunks of max size `n` */
function chunkArr(arr, n) {
if (n <= 0) throw new Error("n must be greater than 0");
return Array
.from({length: Math.ceil(arr.length/n)})
.map((_, i) => arr.slice(n*i, n*(i+1)))
}
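To illustrate, here is chunkArr in isolation (redefined so the snippet runs on its own), splitting a followed-authors list into "in"-query-sized groups:

```javascript
/** splits array `arr` into chunks of max size `n` */
function chunkArr(arr, n) {
  if (n <= 0) throw new Error("n must be greater than 0");
  return Array
    .from({ length: Math.ceil(arr.length / n) })
    .map((_, i) => arr.slice(n * i, n * (i + 1)));
}

// 25 followed authors -> three chunks of 10, 10 and 5
const authors = Array.from({ length: 25 }, (_, i) => `user${i}`);
const chunks = chunkArr(authors, 10);
console.log(chunks.length);    // -> 3
console.log(chunks[2].length); // -> 5
```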
/**
* Returns the `pageSize` most recent pieces (as `PieceMetadata`
* objects) sorted according to the given `order` field for the
* given array of `authors`.
*
* `authors` may have as many entries as desired.
*
* Optionally, `after` can be provided as a `DataSnapshot` or
* appropriate value for `order` to support paginated results.
*/
async function _getPublishedPiecesByBatchOfAuthors(authors, order, pageSize, after = undefined) {
return Promise.all(
chunkArr(authors, 10)
.map(authorsInChunk => _getPublishedPiecesByAuthors(authorsInChunk, order, pageSize, after))
)
.then(resultBatches => {
return resultBatches.flat()
.sort((a,b) => b[order] - a[order]) // descending, to match orderBy; works if dateCreated is numeric
.slice(0, pageSize); // only return first X results
})
}
export async function getFollowingPieces(userId, following, filter, after = undefined) {
const order = filter === "Popular" ? "likeCount" : "dateCreated";
const sortedPieceMetadata = await _getPublishedPiecesByBatchOfAuthors(following, order, 10, after);
const userFollowedPieces = sortedPieceMetadata // "hydrate" the pieces from their snapshot object
.map(({ docId, snapshot }) => ({
...snapshot.data(),
docId
}));
const lastSnapshot = sortedPieceMetadata.length > 0
? sortedPieceMetadata[sortedPieceMetadata.length-1].snapshot
: undefined;
const piecesWithUserDetails = await Promise.all(
userFollowedPieces.map(/* ... */)
);
return { piecesWithUserDetails, lastSnapshot };
}
Approach 2: Iterate /pieces
In this variation, you search each document in /pieces that matches the base query, and pick out those that match the user's followed authors list.
Max. documents retrieved:
maxSearchCount documents
Pros:
Works well for large number of active/popular followed authors
Cons:
Wasteful document requests with a small number of followed authors
Wasteful document requests with inactive/unpopular authors
May hit the query limit before getting the desired number of documents for inactive/unpopular authors
Requires client-based/function-based filtering
async function findFromQuery(query, predicate, count, pageSize, maxSearchCount, after = undefined) {
if (!query || !("orderBy" in query))
throw new TypeError("query must be a Firestore Query or CollectionReference");
if (typeof predicate !== "function")
throw new TypeError("predicate must be a function, was " + typeof predicate);
if (typeof count !== "number")
throw new TypeError("count must be a number, was " + typeof count);
if (typeof pageSize !== "number")
throw new TypeError("pageSize must be a number, was " + typeof pageSize);
if (typeof maxSearchCount !== "number")
throw new TypeError("maxSearchCount must be a number, was " + typeof maxSearchCount);
const pageQuery = query.limit(pageSize);
const baseQuery = after
? pageQuery.startAfter(after)
: pageQuery;
const docs = [];
let searched = 0, lastSnapshot = undefined;
while (docs.length < count && searched < maxSearchCount) {
const querySnapshot = await (lastSnapshot
? pageQuery.startAfter(lastSnapshot).get()
: baseQuery.get());
querySnapshot.forEach((doc) => {
if (predicate(doc)) {
docs.push(doc);
}
lastSnapshot = doc;
});
searched += querySnapshot.size;
// guard against an empty page to avoid looping forever
if (querySnapshot.empty) break;
}
// may have more than `count` results, so only return that many
return docs.slice(0, count);
}
export async function getFollowingPieces(userId, following, filter, after = undefined) {
const order = filter === "Popular" ? "likeCount" : "dateCreated";
const query = firebase
.firestore()
.collection("pieces")
.orderBy(order, "desc");
const foundPieceSnapshots = await findFromQuery(query, (doc) => {
const author = doc.get("userId");
return following.includes(author);
}, 10, 10, 1000, after);
const userFollowedPieces = foundPieceSnapshots.map(piece => ({
...piece.data(),
docId: piece.id
}));
const lastSnapshot = foundPieceSnapshots[foundPieceSnapshots.length-1];
const piecesWithUserDetails = await Promise.all(
userFollowedPieces.map(/* ... */)
);
return { piecesWithUserDetails, lastSnapshot };
}

members map with a limit per page

How could I make a member limit per page? For example: only 10 members would appear on the first page, and to see the second page you would have to react with ⏩
const { MessageEmbed } = require('discord.js');
module.exports.run = async (client, message, args) => {
const role = message.mentions.roles.first() || message.guild.roles.cache.get(args[0]) || message.guild.roles.cache.find(r => r.name === args.slice(0).join(" "));
const embed = new MessageEmbed()
.setTitle(`Members with a role`)
.addFields(
{ name: 'alphabetical list', value: `\`\`\`fix\n${message.guild.roles.cache.get(role.id).members.map(m => m.user.tag.toUpperCase()).sort().join('\n') || 'none'}\`\`\``}
)
return message.channel.send(embed);
}
I would get the list of users as an array, then use slice to return a portion of the array. In your case I would do:
//Get a list of all user tags
const list = msg.guild.roles.cache.get(role.id).members.map(m => m.user.tag.toUpperCase()).sort();
//Let the user define the starting page
var pageNum = (parseInt(args[0]) * 10) - 10;
//Set a default option
if (!pageNum) {
pageNum = 0;
};
//Get 10 members, starting at the defined page
//Ex: if args[0] was "2", it would give you entries 10-19 of the array
var userList = list.slice(pageNum, pageNum + 10).join("\n");
Now that you can get users based off of a page number, you just need a way to set it! createReactionCollector is what you're looking for in this case. The discordjs.guide website has a great example of this that we can modify to fit our needs:
//Only respond to the two emojis, and only if the member who reacted is the message author
const filter = (reaction, user) => ["◀️", "▶️"].includes(reaction.emoji.name) && user.id === msg.author.id;
//Setting the time is generally a good thing to do, so that your bot isn't constantly waiting for new reactions
//It's set to 2 minutes in this case, which should be plenty of time
const collector = msg.createReactionCollector(filter, {
time: 120000
});
collector.on('collect', (reaction, user) => {
//Do stuff here
});
//We can just return when the collector ends, send a message that the time is up, whatever we want!
collector.on('end', collected => {
return msg.channel.send("I'm done looking for reactions on the message!");
});
Now that we can get users and await reactions, we only need to put everything together. I would put the list retrieval in a separate function that you can call easily:
//Initially take the page number from user input if requested
var page = parseInt(args[0]);
if (!page) {
page = 1;
};
//Send the message in a way that lets us edit it later
const listMsg = await msg.channel.send("This is what will be reacted to!");
//React in order
await listMsg.react("◀️");
await listMsg.react("▶️");
const filter = (reaction, user) => ["◀️", "▶️"].includes(reaction.emoji.name) && user.id === msg.author.id;
const collector = listMsg.createReactionCollector(filter, {
time: 120000
});
collector.on('collect', (reaction, user) => {
reaction.emoji.reaction.users.remove(user.id);
switch (reaction.emoji.name) {
case "◀️":
//Decrement the page number
--page;
//Make sure we don't go back too far
if (page < 1) {
page = 1;
};
listMsg.edit(getUsers(page));
break;
case "▶️":
//Increment the page number
++page;
listMsg.edit(getUsers(page));
break;
};
});
collector.on('end', collected => {
return msg.channel.send("I'm done looking for reactions on the message!");
});
function getUsers(n) {
const list = msg.guild.roles.cache.get(role.id).members.map(m => m.user.tag.toUpperCase()).sort();
//Take the page from the function params
var pageNum = (n * 10) - 10;
if (!pageNum) {
pageNum = 0;
};
return list.slice(pageNum, pageNum + 10).join("\n");
};
That's pretty much it! Obviously you'll have to tweak this to fit your own bot, but this code should be a great starting point.
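As a side note, the page-slicing math is easy to verify in isolation. Below is a standalone sketch with a hypothetical getPage helper (not part of the bot code above); remember that slice's end index is exclusive, so start + 10 yields a full 10 entries:

```javascript
// Hypothetical helper: return the nth page (1-based) of 10 entries.
function getPage(list, page) {
  // Page 1 starts at index 0, page 2 at index 10, and so on.
  let start = (page * 10) - 10;
  if (!(start > 0)) start = 0; // covers NaN and negative pages
  // slice's end index is exclusive, so start + 10 returns 10 entries.
  return list.slice(start, start + 10);
}

const tags = Array.from({ length: 23 }, (_, i) => `USER#${i}`);
console.log(getPage(tags, 1).length); // -> 10
console.log(getPage(tags, 3));        // -> ["USER#20", "USER#21", "USER#22"]
```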

Wait for all Firebase data query requests before executing code

I am trying to fetch data from different collections in my cloud Firestore database in advance, before I process them and apply them to a batch. I created two async functions: one to capture the data and another to execute certain code only after all the data is collected, because I didn't want the code executing and creating errors before the data is fetched. But when I try to access matchesObject after the async data-collection function has finished, it keeps saying "cannot access property matchStatus of undefined". I thought I took care of that with async and await? Could anyone shed some light as to why it is undefined one moment?
axios.request(options).then(function(response) {
console.log('Total matches count :' + response.data.matches.length);
const data = response.data;
var matchesSnapshot;
var marketsSnapshot;
var tradesSnapshot;
var betsSnapshot;
matchesObject = {};
marketsObject = {};
tradesObject = {};
betsObject = {};
start();
async function checkDatabase() {
matchesSnapshot = await db.collection('matches').get();
matchesSnapshot.forEach(doc => {
matchesObject[doc.id] = doc.data();
console.log('matches object: ' + doc.id.toString())
});
marketsSnapshot = await db.collection('markets').get();
marketsSnapshot.forEach(doc2 => {
marketsObject[doc2.id] = doc2.data();
console.log('markets object: ' + doc2.id.toString())
});
tradesSnapshot = await db.collection('trades').get();
tradesSnapshot.forEach(doc3 => {
tradesObject[doc3.id] = doc3.data();
console.log('trades object: ' + doc3.id.toString())
});
betsSnapshot = await db.collection('bets').get();
betsSnapshot.forEach(doc4 => {
betsObject[doc4.id] = doc4.data();
console.log('bets object: ' + doc4.id.toString())
});
}
async function start() {
await checkDatabase();
// this is the part which is undefined, it keeps saying it cant access property matchStatus of undefined
console.log('here is matches object ' + matchesObject['302283']['matchStatus']);
if (Object.keys(matchesObject).length != 0) {
for (let bets of Object.keys(betsObject)) {
if (matchesObject[betsObject[bets]['tradeMatchId']]['matchStatus'] == 'IN_PLAY' && betsObject[bets]['matched'] == false) {
var sfRef = db.collection('users').doc(betsObject[bets]['user']);
batch11.set(sfRef, {
accountBalance: admin.firestore.FieldValue + parseFloat(betsObject[bets]['stake']),
}, {
merge: true
});
var sfRef = db.collection('bets').doc(bets);
batch12.set(sfRef, {
tradeCancelled: true,
}, {
merge: true
});
}
}
}
});
There are too many smaller issues in the current code to try to debug them one-by-one, so this refactor introduces various tests against your data. It currently won't make any changes to your database and is meant to be a replacement for your start() function.
One of the main differences against your current code is that it doesn't unnecessarily download 4 collections worth of documents (two of them aren't even used in the code you've included).
Steps
First, it will get all the bet documents that have matched == false. From these documents, it will check for any syntax errors and report them to the console. For each valid bet document, the ID of its linked match document is grabbed so we can then fetch only the match documents we actually need. Then we queue up the changes to the user's balance and the bet's document. Finally, we report how many changes are queued and commit them (once you uncomment the line).
Code
Note: fetchDocumentsById() is defined in this gist. It's a helper function to allow someCollectionRef.where(FieldPath.documentId(), 'in', arrayOfIds) to take more than 10 IDs at once.
async function applyBalanceChanges() {
const betsCollectionRef = db.collection('bets');
const matchesCollectionRef = db.collection('matches');
const usersCollectionRef = db.collection('users');
const betDataMap = {}; // Record<string, BetData>
await betsCollectionRef
.where('matched', '==', false)
.get()
.then((betsSnapshot) => {
betsSnapshot.forEach(betDoc => {
betDataMap[betDoc.id] = betDoc.data();
});
});
const matchDataMap = {}; // Record<string, MatchData | undefined>
// betIdList contains all IDs that will be processed
const betIdList = Object.keys(betDataMap).filter(betId => {
const betData = betDataMap[betId];
if (!betData) {
console.log(`WARN: Skipped Bet #${betId} because it was falsy (actual value: ${betData})`);
return false;
}
const matchId = betData.tradeMatchId;
if (!matchId) {
console.log(`WARN: Skipped Bet #${betId} because it had a falsy match ID (actual value: ${matchId})`);
return false;
}
if (!betData.user) {
console.log(`WARN: Skipped Bet #${betId} because it had a falsy user ID (actual value: ${betData.user})`);
return false;
}
const stakeAsNumber = Number(betData.stake); // not using parseFloat as it's too lax
if (isNaN(stakeAsNumber)) {
console.log(`WARN: Skipped Bet #${betId} because it had an invalid stake value (original NaN value: ${betData.stake})`);
return false;
}
matchDataMap[matchId] = undefined; // using undefined because its the result of `doc.data()` when the document doesn't exist
return true;
});
await fetchDocumentsById(
matchesCollectionRef,
Object.keys(matchDataMap),
(matchDoc) => matchDataMap[matchDoc.id] = matchDoc.data()
);
const batch = db.batch();
let queuedUpdates = 0;
betIdList.forEach(betId => {
const betData = betDataMap[betId];
const matchData = matchDataMap[betData.tradeMatchId];
if (matchData === undefined) {
console.log(`WARN: Skipped /bets/${betId}, because its linked match doesn't exist!`);
return; // `continue` isn't valid inside a forEach callback
}
if (matchData.matchStatus !== 'IN_PLAY') {
console.log(`INFO: Skipped /bets/${betId}, because its linked match status is not "IN_PLAY" (actual value: ${matchData.matchStatus})`);
return;
}
const betRef = betsCollectionRef.doc(betId);
const betUserRef = usersCollectionRef.doc(betData.user);
batch.update(betUserRef, { accountBalance: admin.firestore.FieldValue.increment(Number(betData.stake)) });
batch.update(betRef, { tradeCancelled: true });
queuedUpdates += 2; // for logging
});
console.log(`INFO: Batch currently has ${queuedUpdates} queued`);
// only uncomment when you are ready to make changes
// await batch.commit();
}
Usage:
axios.request(options)
.then(function(response) {
const data = response.data;
console.log('INFO: Total matches count from API:' + data.matches.length);
return applyBalanceChanges();
});

Get author user ID using Parse.Query not current user

I have two parse classes, User and Place.
If a user adds a place, the user is added as a Pointer in the Place user column.
In order to list all places and determine how many places a user has, I use the following query:
loadTotalPointsDetail(params: any = {}): Promise<Place[]> {
const page = params.page || 0;
const limit = params.limit || 100;
const query = new Parse.Query(Place);
query.equalTo('user', Parse.User.current());
query.skip(page * limit);
query.limit(limit);
query.include('category');
query.include('user');
query.doesNotExist('deletedAt');
return query.find();
}
Filtering by Parse.User.current(), I will get the current user's places.
If I don't filter by Parse.User.current(), it will return all places as objects, containing all data.
How can I filter by the place's real author/user, not the current (logged-in) one?
loadTotalPointsDetail(params: any = {}): Promise<Place[]> {
const page = params.page || 0;
const limit = params.limit || 100;
const query = new Parse.Query(Place);
const user = new Parse.User();
user.id = 'The id of the user that you want to search for';
query.equalTo('user', user);
query.skip(page * limit);
query.limit(limit);
query.include('category');
query.include('user');
query.doesNotExist('deletedAt');
return query.find();
}
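If you would rather not instantiate a Parse.User just to set its id, the SDK also offers createWithoutData, which builds a stub object carrying only the id; that is all equalTo needs for a pointer comparison. A sketch, with Parse and the Place class passed in as parameters only to keep it self-contained (buildPlacesByAuthorQuery is a hypothetical helper name):

```javascript
// Build a query for all places authored by a given user, using a
// pointer stub created from just the user's objectId.
function buildPlacesByAuthorQuery(Parse, Place, authorId) {
  const author = Parse.User.createWithoutData(authorId);
  const query = new Parse.Query(Place);
  query.equalTo('user', author); // matches the Pointer in the user column
  query.doesNotExist('deletedAt');
  return query;
}
```

The returned query can then be paginated and executed with skip/limit/find exactly as in the answer above.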
I'll post the solution here; it's not the most elegant, but it works for me:
async loadData() {
try {
const places = await this.placeService.loadTotalPointsDetail(this.params);
const placeU = await this.placeService.loadPlaceU(this.getParams().id);
for (const place of places) {
this.places.push(place);
}
let u = placeU.user.id;
let totalUserPlaces = places.filter(x => x.user.id == u);
if (totalUserPlaces.length > 0) { // filter() always returns an array, so check its length
/* The total returned by reduce() will be stored in myTotal */
const myTotal = totalUserPlaces.reduce((total, place) => {
/* For each place iterated, access the points field of the
current place being iterated and add that to the running total */
return total + place.points;
}, 0); /* Total is initially zero */
this.points = myTotal;
}
} catch (err) {
const message = await this.getTrans('ERROR_NETWORK');
this.showToast(message);
}
}
So I'm loading two separate queries:
const places = await this.placeService.loadTotalPointsDetail(this.params); // gets all place listings
const placeU = await this.placeService.loadPlaceU(this.getParams().id); // gets the place for the current post's ID
Then I extract the post author's user ID:
let u = placeU.user.id;
And I filter on that user to get his places:
let totalUserPlaces = places.filter(x => x.user.id == u);
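A leaner variant of the same idea would query only the author's places (as in the answer above) and total the points from that result, instead of fetching every place and filtering client-side. A sketch of the summing step as a pure helper; loadPlacesByUser is a hypothetical service method, not part of the code above:

```javascript
// Total the points of a list of place objects. `place.points` mirrors
// the field reduced over in the code above; missing values count as 0.
function sumPoints(places) {
  return places.reduce((total, place) => total + (place.points || 0), 0);
}

// Hypothetical usage, assuming a service method that runs the
// author-pointer query server-side:
//   const userPlaces = await this.placeService.loadPlacesByUser(authorId);
//   this.points = sumPoints(userPlaces);
```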

Limit number of records in firebase

Every minute I have a script that pushes a new record into my Firebase database.
What I want is to delete the oldest records when the length of the list reaches a fixed value.
I have been through the docs and other posts, and the thing I have found so far is something like this:
// Max number of lines of the chat history.
const MAX_ARDUINO = 10;
exports.arduinoResponseLength = functions.database.ref('/arduinoResponse/{res}').onWrite(event => {
const parentRef = event.data.ref.parent;
return parentRef.once('value').then(snapshot => {
if (snapshot.numChildren() >= MAX_ARDUINO) {
let childCount = 0;
let updates = {};
snapshot.forEach(function(child) {
if (++childCount <= snapshot.numChildren() - MAX_ARDUINO) {
updates[child.key] = null;
}
});
// Update the parent. This effectively removes the extra children.
return parentRef.update(updates);
}
});
});
The problem is that onWrite seems to download all the related data every time it is triggered.
That is fine when the list is not so long, but I have about 4000 records, and every month it seems that I blow through my Firebase download quota because of it.
Would anyone know how to handle this kind of situation?
OK, so in the end I came up with 3 functions. One updates the number of arduino records, one fully recounts it if the counter is missing, and the last one uses the counter to make a query with the limitToFirst filter so it retrieves only the relevant data to remove.
It is actually a combination of these two examples provided by Firebase:
https://github.com/firebase/functions-samples/tree/master/limit-children
https://github.com/firebase/functions-samples/tree/master/child-count
Here is my final result
const MAX_ARDUINO = 1500;
exports.deleteOldArduino = functions.database.ref('/arduinoResponse/{resId}/timestamp').onWrite(event => {
const collectionRef = event.data.ref.parent.parent;
const countRef = collectionRef.parent.child('arduinoResCount');
return countRef.once('value').then(snapCount => {
const excess = snapCount.val() - MAX_ARDUINO;
if (excess <= 0) return null; // nothing to trim, and limitToFirst rejects values <= 0
// transaction() is only available on references, not queries, so
// fetch the oldest entries and delete them with a multi-path update
return collectionRef.limitToFirst(excess).once('value').then(snapshot => {
const updates = {};
snapshot.forEach(child => { updates[child.key] = null; });
return collectionRef.update(updates);
});
});
});
exports.trackArduinoLength = functions.database.ref('/arduinoResponse/{resId}/timestamp').onWrite(event => {
const collectionRef = event.data.ref.parent.parent;
const countRef = collectionRef.parent.child('arduinoResCount');
// Return the promise from countRef.transaction() so our function
// waits for this async event to complete before it exits.
return countRef.transaction(current => {
if (event.data.exists() && !event.data.previous.exists()) {
return (current || 0) + 1;
} else if (!event.data.exists() && event.data.previous.exists()) {
return (current || 0) - 1;
}
}).then(() => {
console.log('Counter updated.');
});
});
exports.recountArduino = functions.database.ref('/arduinoResCount').onWrite(event => {
if (!event.data.exists()) {
const counterRef = event.data.ref;
const collectionRef = counterRef.parent.child('arduinoResponse');
// Return the promise from counterRef.set() so our function
// waits for this async event to complete before it exits.
return collectionRef.once('value')
.then(arduinoRes => counterRef.set(arduinoRes.numChildren()));
}
});
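Note that the event.data API used above comes from the pre-1.0 firebase-functions SDK; since v1.0, database onWrite handlers receive a Change object (with before/after snapshots) and a context instead. The counter logic carries over directly. A sketch of the delta decision as a pure helper, with the v1-style wiring shown in comments as an assumption about the newer API:

```javascript
// Pure helper for the counter transaction: +1 when a record is created,
// -1 when one is deleted, no change when an existing value is updated.
function counterDelta(beforeExists, afterExists) {
  if (afterExists && !beforeExists) return 1;
  if (!afterExists && beforeExists) return -1;
  return 0;
}

// In a firebase-functions v1+ handler this would be wired up roughly like:
//   exports.trackArduinoLength = functions.database
//     .ref('/arduinoResponse/{resId}/timestamp')
//     .onWrite((change, context) => {
//       const countRef = change.after.ref.parent.parent.parent.child('arduinoResCount');
//       return countRef.transaction(current =>
//         (current || 0) + counterDelta(change.before.exists(), change.after.exists()));
//     });
```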
I have not tested it yet, but I will post my results soon!
I have also heard that one day Firebase will add a "size" query, which is definitely missing in my opinion.
