Firestore transactions showing unexpected behaviour when used in cloud functions - javascript

I am writing an app that features an inventory in which users can reserve products. I want to ensure that two users cannot reserve the same product at the same time, so I intend to use transactions. When using transactions from the Firebase SDK everything works as intended, but I am getting unexpected behavior when using transactions from a callable Cloud Function. To simulate the case where two users happen to reserve the same product, I use setTimeout in my Cloud Function to halt the function for 3 seconds, and I launch the function from two different clients with different user contexts.
export const reserveProduct = functions.https.onCall(async (data, context) => {
  function testTimeout() {
    return new Promise((resolve, reject) => {
      setTimeout(() => {
        resolve(true)
      }, 3000)
    })
  }

  if (!context.auth) {
    return {
      error: `You must be logged in to reserve products`
    }
  } else {
    const productRef = admin.firestore().collection('products').doc(data.productID)
    const userRef = admin.firestore().collection('users').doc(context.auth.uid)

    return admin.firestore().runTransaction((transaction) => {
      return transaction.get(productRef).then(async (doc) => {
        if (doc.get('status') == 'reserved') {
          throw "Document already reserved!"
        } else {
          console.log("Product not reserved, reserving now!")
        }
        await testTimeout()
        transaction.update(productRef, { status: 'reserved' });
        transaction.update(userRef, { reserved: admin.firestore.FieldValue.arrayUnion(data.productID) })
      })
    }).then(() => {
      console.log("Transaction Successfully committed !")
    }).catch((error) => {
      throw "Transaction failed, product already reserved"
    })
  }
})
After running this function call from two different clients simultaneously, the call from my first client returns successfully as expected, but only after roughly 35s (which is far too long for such a simple transaction). However, the second function call times out without returning any value. I have not found any documentation stating that transactions behave differently in callable Cloud Functions, nor any reason they should be affected by running in the emulator.
I am expecting to simply get a return value from whichever function call manages to modify the data first, and to catch the error from the call that retries and sees the reserved state.
Any help would be appreciated, thanks!

One major difference between the two environments is the way their SDKs handle transactions:
The client-side SDKs use an optimistic compare-and-set approach for transactions, meaning that they pass the values you read in the transaction with the data you're writing. The server then only writes the new data if the documents you read haven't been updated.
The server-side SDKs (used in your Cloud Function) use a more traditional pessimistic approach for transactions, and place a lock on each document that you read in the transaction.
You can read more about database contention in the SDKs in the documentation.
While I'm not exactly certain how this is affecting your code, I suspect it is relevant to the difference in behavior you're seeing between the client-side and server-side implementations.
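Given the pessimistic locking described above, it is worth noting that the question's setTimeout is awaited inside the transaction callback, so the lock taken by transaction.get(productRef) is held for the entire artificial delay on every attempt. A rough sketch of keeping the locked section short, assuming the delay exists only to simulate two concurrent clients (the exported name and the HttpsError mapping are my additions, not from the question):

// Sketch only: do the slow work before the transaction so the document lock
// taken by the Admin SDK is held only briefly.
export const reserveProductShortTx = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    return { error: 'You must be logged in to reserve products' }
  }

  // Simulated slow work happens outside the transaction
  await new Promise((resolve) => setTimeout(resolve, 3000))

  const productRef = admin.firestore().collection('products').doc(data.productID)
  const userRef = admin.firestore().collection('users').doc(context.auth.uid)

  try {
    await admin.firestore().runTransaction(async (transaction) => {
      const doc = await transaction.get(productRef)
      if (doc.get('status') === 'reserved') {
        throw new Error('Document already reserved!')
      }
      transaction.update(productRef, { status: 'reserved' })
      transaction.update(userRef, {
        reserved: admin.firestore.FieldValue.arrayUnion(data.productID)
      })
    })
    return { success: true }
  } catch (error) {
    // Map the failure to a callable error instead of letting the client time out
    throw new functions.https.HttpsError('aborted', 'Transaction failed, product already reserved')
  }
})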

Related

Would sending the response to the client while letting an asynchronous operation continue to run be a good idea?

So I need to implement an "expensive" API endpoint. Basically, the user/client would need to be able to create a "group" of existing users.
This "create group" API would need to check that each user fulfills the criteria, i.e. all users in the same group would need to be from the same region, the same gender, within an age range, etc. This operation can be quite expensive, especially since there is no limit on how many users can be in one group, so it's possible that the client requests a group of 1000 users, for example.
My idea is that the endpoint will just create a database entry and mark the "group" as pending while the checking process is still happening; after it has completed, it will update the group status to "completed" or "error" with an error message, and the client would need to periodically fetch the status while it's still pending.
My implementation idea is something along these lines:
const createGroup = async (req, res) => {
  const { ownerUserId, userIds } = req.body;

  // This will create database entry of group with "pending" status and return the primary key
  const groupId = await insertGroup(ownerUserId, 'pending');

  // This is an expensive function which will do checking over the network, and would take 0.5s per user id for example
  // I would like this to keep running after this API endpoint send the response to client
  checkUser(userIds)
    .then((isUserIdsValid) => {
      if (isUserIdsValid) {
        updateGroup(groupId, 'success');
      } else {
        updateGroup(groupId, 'error');
      }
    })
    .catch((err) => {
      console.error(err);
      updateGroup(groupId, 'error');
    });

  // The client will receive a groupId to check periodically whether its ready via separate API
  res.status(200).json({ groupId });
};
My question is: is it a good idea to do this? Am I missing something important that I should consider?
Yes, this is the standard approach to long-running operations. Instead of offering a createGroup API that creates and returns a group, think of it as having an addGroupCreationJob API that creates and returns a job.
Instead of polling (periodically fetching the status to check whether it's still pending), you can use a notification API (events via websocket, SSE, webhooks etc) and even subscribe to the progress of processing. But sure, a check-status API (via GET request on the job identifier) is the lowest common denominator that all kinds of clients will be able to use.
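As a rough sketch of that lowest-common-denominator check-status API, assuming Express-style handlers and a hypothetical getGroup(groupId) helper (neither appears in the original code):

// Hypothetical status endpoint the client can poll.
// Assumes getGroup(groupId) resolves to { status, errorMessage } or null when the id is unknown.
const getGroupStatus = async (req, res) => {
  const group = await getGroup(req.params.groupId);
  if (!group) {
    return res.status(404).json({ error: 'Group not found' });
  }
  // status is one of 'pending' | 'success' | 'error'
  res.status(200).json({ status: group.status, errorMessage: group.errorMessage });
};

// Wiring it up, e.g.: app.get('/groups/:groupId/status', getGroupStatus);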
Did I not consider something important?
Failure handling gets much more complicated. Since you no longer create the group in a single transaction, you might find your application left in some intermediate state, e.g. when the service crashed (due to something unrelated) during the checkUser() call. In particular:
- You'll need something to ensure that there are no pending groups in your database for which no actual creation process is running (see the sketch below).
- You'll need to give users the ability to retry a job - will insertGroup work if there already is a group with the same identifier in the error state?
- If you separate the group and the jobs into independent entities, do you need to ensure that no two pending jobs are trying to create the same group?
- Last but not least, you might want to allow users to cancel a currently running job.
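A minimal sketch of that first point, assuming each group row stores a createdAt timestamp and a hypothetical findGroupsByStatusOlderThan helper (only updateGroup exists in the original code):

// Hypothetical cleanup job: mark pending groups as failed if they have been
// pending longer than any legitimate check could reasonably take.
const PENDING_TIMEOUT_MS = 15 * 60 * 1000;

const failStalePendingGroups = async () => {
  const cutoff = Date.now() - PENDING_TIMEOUT_MS;
  // Assumed helper: returns groups with status 'pending' created before `cutoff`
  const staleGroups = await findGroupsByStatusOlderThan('pending', cutoff);
  for (const group of staleGroups) {
    await updateGroup(group.id, 'error');
  }
};

// Run it periodically, e.g.: setInterval(failStalePendingGroups, 60 * 1000);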

What is the best way to handle two async calls that must both pass and make an "irreversible" change?

I am currently wondering about this issue: I have a team to which I want to add a user (i.e. write a new user to the database for that team), and I also want to increase the number of users the team needs to pay for (I use Stripe subscriptions).
async handleNewUser(user, teamId) {
  await addUserToTeamInDatabase(user, teamId)
  await incrementSubscriberQuantityInStripe(teamId)
}
The problem is, which one do I do first? I recently ran into an issue where users were being added but the subscriber count was not increasing. However, if I reverse them and increment first and then write to the database, and something goes wrong in that last part, the client pays more but does not get a new member. One possible way of approaching this is with try/catch:
async handleNewUser(user, teamId) {
  let userAddedToDatabase = false
  let userAddedInStripe = false
  try {
    await addUserToTeamInDatabase(user, teamId)
    userAddedToDatabase = true
    await incrementSubscriberQuantityInStripe(teamId)
    userAddedInStripe = true
  } catch (error) {
    if (userAddedToDatabase && !userAddedInStripe) {
      await removeUserFromTeamInDatabase()
    }
  }
}
So I'm writing the new user to the database and then making a call to the stripe API.
Is there a better way to approach this? It feels clumsy. Also, is there a pattern that addresses this problem, or a name for it?
I'm using Firebase realtime database.
Thanks everyone!
What you want to perform is a transaction. In databases a transaction is a group of operations that is successful if all of its operations are successful. If at least one operation fails, no changes are made (all the other operations are cancelled or rolled back).
And Realtime Database supports transactions! Check the documentation
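For illustration, a transaction in the Realtime Database JavaScript SDK looks roughly like this (the path and counter are made up for this example; they are not from the original code):

// Illustrative only: atomically increment a member count stored in the
// Realtime Database. The update function may be re-run on conflict, so it
// must be free of side effects.
firebase.database()
  .ref(`teams/${teamId}/memberCount`)
  .transaction((currentCount) => {
    // currentCount is null if the node does not exist yet
    return (currentCount || 0) + 1;
  })
  .then(({ committed }) => {
    console.log(committed ? 'Count updated' : 'Transaction aborted');
  });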
If both operations were in the same database you'd normally bundle them in a transaction, and the DB would revert to the initial state if one of them failed. In your case you have operations in two external systems (the DB and Stripe), so you'll have to implement the transactional logic yourself.
You could simplify your example by checking where the error comes from in the catch clause. Then you can get rid of the flags. Something like this:
async handleNewUser(user, teamId) {
  try {
    await addUserToTeamInDatabase(user, teamId)
    await incrementSubscriberQuantityInStripe(teamId)
  } catch (error) {
    // If we fail to increment the subscriber count in Stripe,
    // cancel the transaction by removing the user from the DB
    if (error instanceof StripeError) {
      await removeUserFromTeamInDatabase(user, teamId)
    }
    // Re-throw the error upstream
    throw error;
  }
}
I use instanceof here, but you can change the conditional logic to fit your program.

Firebase Realtime Database - Determine if user has access to path

I have updated my Firebase Realtime Database access rules, and have noticed that some clients now try to access paths they do not have access to. That is fine in itself - but my problem is that my code stops after it fails to read a restricted node.
I see the error below in my console, and then the loading of subsequent data stops:
permission_denied at /notes/no-access-node
I begin by collecting access nodes from /access_notes/uid and continue to read all data from /notes/noteId.
My code for collecting notes from the database is below:
//*** SUBSCRIPTION */
database.ref(`access_notes/${uid}`).on('value', (myNotAccessSnaps) => {
  let subscrPromises = []
  let collectedNotes = {}
  // Collect all notes we have access to
  myNotAccessSnaps.forEach((accessSnap) => {
    const noteId = accessSnap.key
    subscrPromises.push(
      database.ref(`notes/${noteId}`)
        .once('value', (notSnap) => {
          const notData = notSnap.val()
          const note = { id: notSnap.key, ...notData }
          collectedNotes[note.id] = note
        },
        (error) => {
          console.warn('Note does not exist or no access', error)
        })
    )
  })

  Promise.all(subscrPromises)
    .then(() => {
      const notesArray = Object.values(collectedNotes)
      ...
    })
    .catch((error) => { console.error(error); return Promise.resolve(true) })
})
I do not want the client to halt on permission_denied!
Is there a way to see if the user has access to a node /notes/no_access_note without raising an error?
Kind regards /K
I do not want the client to halt on permission_denied!
You're using Promise.all, which MDN documents as:
Promise.all() will reject immediately upon any of the input promises rejecting.
You may want to look at Promise.allSettled(), which MDN documents as:
[Promise.allSettled()] is typically used when you have multiple asynchronous tasks that are not dependent on one another to complete successfully, or you'd always like to know the result of each promise.
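A rough sketch of how that could look in the collection code above, reusing the subscrPromises and collectedNotes variables from the question (and assuming the individual .once() reads reject on permission_denied, as the error in the question suggests):

// Sketch only: wait for every read to settle instead of failing fast.
Promise.allSettled(subscrPromises)
  .then((results) => {
    results.forEach((result) => {
      if (result.status === 'rejected') {
        // e.g. permission_denied on a single note: skip it and carry on
        console.warn('Skipping inaccessible note:', result.reason)
      }
    })
    // Successful reads have already populated collectedNotes via their callbacks
    const notesArray = Object.values(collectedNotes)
    // ...continue with notesArray as before
  })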
Is there a way to see if the user has access to a node /notes/no_access_note without raising an error?
As far as I know the SDK always logs data access permission errors, and this cannot be suppressed.
Trying to access data that the user doesn't have access to is considered a programming error in Firebase. In normal operation your code should ensure that it never encounters such an error.
This means that your data access should follow the happy path of accessing data it knows it has access to. So you store the list of the notes the user has access to, and then from that list access each individual note.
So in your situation I'd recommend finding out why you're trying to read a note the user doesn't have access to, instead of trying to hide the message from the console.

Firebase concurrent read/write

I use Firestore transactions to read and compare a value, and then update that value on a document in a collection.
But when I send the same query twice at the same time, both executions read the same value, so the check I do passes for both, and I get a bad result at the end:
the field is decremented twice.
My transaction is:
let docRef = db.collection('X').doc('SF');
let reduceValue = 20;

let transaction = db.runTransaction(t => {
  return t.get(docRef)
    .then(doc => {
      let value = doc.data().y;
      if (value >= reduceValue) {
        t.update(docRef, { y: FieldValue.increment(-reduceValue) });
        return Promise.resolve('Y increased');
      } else {
        return Promise.reject('sorry y is ');
      }
    });
}).then(result => {
  console.log('Transaction success', result);
}).catch(err => {
  console.log('Transaction failure:', err);
});
Thanks.
If I understand your question correctly, what you are describing is the correct behaviour of a Firestore transaction executed with one of the Firestore client SDKs:
You call a function twice, and that function aims to decrement a counter;
At the end, the counter is decremented twice.
The fact that your two calls are executed "at the same time" should not change the result: The counter should be decremented twice.
This is exactly what the transaction ensures: in the case of a concurrent edit, Cloud Firestore checks that the initial doc (db.collection('X').doc('SF') in your case) has not changed during the transaction. If it has changed, it runs the entire transaction again, retrying the operation with the new data.
This is because the Client SDKs use optimistic concurrency for Transactions and that, consequently, there is no locking of the documents.
I suggest you watch the following official video which explains that in detail.
You will also see in this video that for Transactions executed from a back-end (e.g. using the Admin SDK) the mechanism is not the same: they implement pessimistic concurrency (locking documents during transactions).
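To make the optimistic retry concrete for the code in the question (reduceTransaction here is an assumed wrapper around the db.runTransaction(...) call above, and the starting values are made up):

// Sketch only: run the question's transaction twice "at the same time".
// Assume y starts at 30 and reduceValue is 20.
Promise.allSettled([reduceTransaction(), reduceTransaction()])
  .then((results) => {
    // One run commits (y: 30 -> 10). The other run's commit conflicts, so the
    // client SDK re-runs it: it re-reads y = 10, the `value >= reduceValue`
    // check fails, and that run rejects instead of driving y below zero.
    // If y had started at 40 or more, both runs would legitimately commit and
    // y would be decremented twice - which is the expected behaviour.
    console.log(results.map((r) => r.status));
  });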

How to reset a delayed exchange in RabbitMQ if it already exists

I am trying to check whether the current exchange exists. If it exists, I want to delete it and create it again. When I do not use try/catch and try to delete an exchange that does not exist, I lose the connection and get an error. How can I create the exchange, if it does not exist, after I try to delete it?
export const resetDelayedExchange = (connection, expectedMessages) => async (message, type) => {
  const exchange = getExchange(type)
  const channel = await connection.createChannel()
  const cleanupDelayedExchange = `${exchange}.${delayedExchange}`

  const restoreOnFailure = e => {
    channel.assertExchange(cleanupDelayedExchange, delayedExchangeType, { durable: true, arguments: { [delayedMessageType]: 'direct' } })
    channel.bindExchange(exchange, cleanupDelayedExchange)
    logger.silly(`publishing message to ${cleanupDelayedExchange}`)
    channel.publish(cleanupDelayedExchange, '', expectedMessages[type].encode(message).finish(), headerArg)
  }

  try {
    channel.deleteExchange(cleanupDelayedExchange, { ifUnused: false })
    connection.on('error', restoreOnFailure)
  } catch (error) {
    restoreOnFailure(error)
  }
}
Exchange declarations in RabbitMQ are assertive. This means that you run a statement that declares the existence of the exchange, and if the exchange does not exist, it will be created at that time. If the exchange already exists, then one of two things will happen:
If it was re-declared with different properties, you will get a channel error, which is indicated by a message followed by channel closure. This is how most errors in RabbitMQ are handled. Thus, your client needs to gracefully handle that (in my opinion, the raising of an exception for this would be an inappropriate use of exceptions).
If it is re-declared with the same parameters as previous, then no action will be done; the channel will remain open and usable as-is.
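A rough sketch of leaning on that assertive behaviour with amqplib instead of deleting and re-creating, assuming the RabbitMQ delayed-message-exchange plugin that the question's delayedExchangeType and delayedMessageType variables appear to refer to (the function name and logging are my additions):

// Sketch only: declare the delayed exchange idempotently. If it already exists
// with these exact properties this is a no-op; if it exists with different
// properties the channel is closed with a 406 PRECONDITION_FAILED error,
// but the connection itself stays usable.
const ensureDelayedExchange = async (connection, exchangeName) => {
  const channel = await connection.createChannel()
  try {
    await channel.assertExchange(exchangeName, 'x-delayed-message', {
      durable: true,
      arguments: { 'x-delayed-type': 'direct' }
    })
    return channel
  } catch (error) {
    console.warn(`Exchange ${exchangeName} already exists with different properties`, error)
    throw error
  }
}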
Editorial: I am not aware of many valid use cases where I'd want to be deleting and re-declaring exchanges on a regular basis. I consider the exchange to be part of the static configuration of the broker. Changes to exchange configuration are often best done at the time of application deployment.
