We are running an application on Firestore and have a simple trigger: when an order's details are created or updated, some of their information should be rewritten in the parent order collection.
The function for this has the following code:
export const updateOrderDetails = functions
  .region(FUNCTION_REGION)
  .firestore.document("orders/{orderId}/details/pickupAndDropoff")
  .onWrite(async (change, context) => {
    // Mirror the pickupAndDropoff details onto the parent order document.
    return await admin
      .firestore()
      .collection("orders")
      .doc(context.params.orderId)
      .set({ pickupAndDropoff: change.after.data() }, { merge: true });
  });
It was working fine before, but now roughly every third execution is delayed at random, sometimes by a few minutes. In the Cloud Functions logs we see normal execution times (<200 ms), so it seems the trigger runs after a huge pause.
What's worse, from time to time our change.after.data() is undefined, even though we never delete anything; it's just updates and creates.
It was working fine and we haven't changed anything since last week, but now these unexpected delays have started. We've also checked the Firebase status page, but there are no reported malfunctions in the Cloud Functions service. What can be the cause of this?
The problem can be due to a monotonically increasing orderId being passed as the parameter here:
...
.collection("orders")
.doc(context.params.orderId)
...
Can you check whether the orderId passed here is monotonically increasing with each request? That can lead to hotspots, which impacts latency.
To explain: I think the write rate must be changing at different days and times, as user traffic or load-testing requests change, which creates this unexpected kind of behaviour. At a low write rate, the requests work as expected most of the time. At a high write rate, the requests hit a hotspot situation in Firestore, as mentioned in the Firestore documentation, resulting in delays (a latency issue).
Here is the relevant link to the Firestore best practices documentation.
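If the IDs do turn out to be sequential, one common mitigation is to let Firestore generate its own randomly distributed document IDs. A minimal sketch, assuming you control how orders are created (the createOrder helper is hypothetical):
const admin = require("firebase-admin");
admin.initializeApp();

async function createOrder(orderData) {
  // doc() with no argument returns a reference with a random auto-ID,
  // which spreads writes across the key space instead of hotspotting
  // on an ever-increasing key.
  const ref = admin.firestore().collection("orders").doc();
  await ref.set(orderData);
  return ref.id;
}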
Thanks to Frank van Puffelen's suggestion, we sent this question directly to Firebase support, and after their internal investigation we got a reply from the engineering team that it was in fact an infrastructure malfunction.
The reply I got from them was:
I escalated the issue to recover more information. So far it appears that there was an issue with pub/sub delivering and creating the event. The Firestore team is also communicating with the pub/sub team to investigate the issue and prevent future incidents.
It seems that the fastest way to deal with such problems is to write directly to the Firebase support team, because, as they mentioned in the automatic reply I got after sending a support ticket:
For Firebase outages not listed on the status dashboard, we'll respond within 4 hours.
which makes it the best option.
Related
I'm using an extremely simple Cloud Function to (try to) keep my Realtime Database in sync with Firestore.
exports.copyDocument = functions.database.ref('/invoices/{companyId}/{documentId}')
  .onWrite((change, context) => {
    // Nothing to copy if the node was deleted.
    if (!change.after.exists()) {
      return null;
    }
    // Mirror the RTDB invoice into the matching Firestore document.
    return admin.firestore().collection('companies').doc(context.params.companyId)
      .collection('invoices')
      .doc(context.params.documentId)
      .set(change.after.val());
  });
Unfortunately, I am seeing issues where sometimes the Cloud Firestore document does not have the latest copy of the Realtime Database data. It's infrequent, but it nonetheless impacts my end users.
Why is this happening?
Two ideas I had:
1. Firestore is possibly unavailable to write to in the Cloud Function, and I don't have retries enabled, so the function could just be erroring out. Enabling retries would solve my problem. However, I have absolutely zero error logs of this happening.
2. The user makes change A, then makes change B 2-3 seconds later. In this case it's possible, I guess, that the Cloud Function trigger for change A hasn't executed yet, but the trigger for change B executes quickly, and then the change A trigger runs and copies stale data. Possible remedies would be to fetch the latest version again from the Realtime Database in my Cloud Function (not ideal; it's nice using change.after.val()), or to keep some incrementing integer on my document and copy it to Firestore using a transaction that compares versions instead of a simple set() (see the sketch after this list).
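A minimal sketch of that second remedy, assuming each RTDB invoice carries an incrementing version field (the field name is hypothetical):
exports.copyDocument = functions.database.ref('/invoices/{companyId}/{documentId}')
  .onWrite((change, context) => {
    if (!change.after.exists()) {
      return null;
    }
    const incoming = change.after.val();
    const docRef = admin.firestore()
      .collection('companies').doc(context.params.companyId)
      .collection('invoices').doc(context.params.documentId);
    return admin.firestore().runTransaction(async (tx) => {
      const snap = await tx.get(docRef);
      // Only write if this event carries a newer version than what is stored,
      // so a late trigger for change A cannot overwrite newer data from B.
      if (!snap.exists || (snap.data().version || 0) < incoming.version) {
        tx.set(docRef, incoming);
      }
    });
  });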
The only error message I do see in my Cloud Function error logs is:
The request was aborted because there was no available instance. Additional troubleshooting documentation can be found at: https://cloud.google.com/functions/docs/troubleshooting#scalability
But those docs indicate that this error is always retried, even without explicitly enabling function retries.
I am leaning towards issue 1: Firestore just 'blips' sometimes, retries aren't enabled, and I'm not logging the failed promise properly. If this is the case, how can I fix this logging?
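For what it's worth, a minimal sketch of what that logging could look like, based on the copy function above:
return admin.firestore().collection('companies').doc(context.params.companyId)
  .collection('invoices')
  .doc(context.params.documentId)
  .set(change.after.val())
  .catch((err) => {
    // Surface the failure in the function logs even when retries are disabled.
    console.error('Firestore copy failed:', err);
    throw err; // rethrow so the invocation is still marked as failed
  });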
I have a Firestore project that needs to be updated automatically without user interaction, but I do not know how to go about it; any help would be appreciated. Take a look at the JSON to understand better:
const party = {
  id: 'bgvrfhbhgnhs',
  isPrivate: 'true',
  isStarted: false,
  created_At: '2021-12-26T05:20:29.000Z',
  start_date: '2021-12-26T02:00:56.000Z'
}
I want to update the isStarted field to true once the current time reaches start_date.
I think you will need Firebase Cloud Functions, although I don't understand exactly what you mean.
With Cloud Functions, you can automatically run code (add, delete, update, anything) on Google's servers without any application or user interaction.
For example, in line with your example, it can automatically set isStarted to true when the start_date time is reached. If you want to build a system that does not require user interaction and should work automatically, you should definitely use Cloud Functions. Otherwise, you cannot do this on the application side.
For more info visit Cloud Functions
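For illustration, a minimal sketch of a scheduled function that flips the flag; it assumes a parties collection, start_date stored as an ISO string, and billing enabled (scheduled functions require the Blaze plan), and the query may need a composite index:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.startDueParties = functions.pubsub
  .schedule('every 1 minutes')
  .onRun(async () => {
    // ISO-8601 strings compare correctly as plain strings.
    const now = new Date().toISOString();
    const due = await admin.firestore().collection('parties')
      .where('isStarted', '==', false)
      .where('start_date', '<=', now)
      .get();
    const batch = admin.firestore().batch();
    due.forEach((doc) => batch.update(doc.ref, { isStarted: true }));
    return batch.commit();
  });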
OK, I managed to find a workaround for updating my documents automatically without user interaction, since the Google billing service won't accept my card to enable Cloud Functions for my project. I tried what I could to make my code work, and I don't know if other people would follow my idea or if it would solve similar issues.
What I did was create an API endpoint in my Next.js app to fetch and update documents after installing the Firebase Admin SDK. I fetched all documents, converted the start_date field of each document to a time, and checked for documents whose start date is less than or equal to the current date; for each such document, I ran a Firestore update (roughly as in the sketch below).
Though this will only run when you make a request to my domain.com/api/update-parties, and never run again.
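A minimal sketch of that endpoint; the parties collection name, the SERVICE_ACCOUNT environment variable, and the file path pages/api/update-parties.js are assumptions:
// pages/api/update-parties.js
import admin from 'firebase-admin';

if (!admin.apps.length) {
  admin.initializeApp({
    credential: admin.credential.cert(JSON.parse(process.env.SERVICE_ACCOUNT)),
  });
}

export default async function handler(req, res) {
  const now = new Date();
  const snapshot = await admin.firestore().collection('parties')
    .where('isStarted', '==', false)
    .get();

  const updates = [];
  snapshot.forEach((doc) => {
    // Compare each party's start_date (an ISO string) with the current time.
    if (new Date(doc.data().start_date) <= now) {
      updates.push(doc.ref.update({ isStarted: true }));
    }
  });

  await Promise.all(updates);
  res.status(200).json({ updated: updates.length });
}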
In order to make it run at scheduled intervals, I signed up for a free-tier account at https://www.easycron.com and added my API endpoint to EasyCron so it makes requests to my endpoint at one-minute intervals. When the request hits my endpoint, it runs my code like any other serverless function 😜. Easy peasy.
I currently have a Node.js server set up that can read and write data in a Firebase database when a user makes a request.
I would like to implement time-based events that result in an action being performed at a certain date or time. The key thing here, though, is that I want the freedom to do this down to the second (for example, write a message to the console after 30 seconds have passed, or on Friday the 13th at 11:30 am).
One way to do this would be to store the date/time an action needs to be performed in the database, then read from the database every second and compare the current date/time against the stored events to know whether an action needs to be performed at that moment. As you can imagine, though, this would be a lot of unnecessary calls to the database and feels like a poor way to implement the system.
Is there a way I can stay synced with the database without having to call every second? Perhaps I could then store a version of the events table locally and update this when a change is made to the database? Would that be a better idea? Is there another solution I am missing?
Any advice would be greatly appreciated, thanks!
EDIT:
How I currently initialise the database:
firebase.initializeApp(firebaseConfig);
var database = firebase.database();
How I then get data from the database:
await database.ref('/').once('value', function(snapshot) {
  snapshot.forEach(function(childSnapshot) {
    if (childSnapshot.key === userName) {
      userPreferences = childSnapshot.val().UserPreferences;
    }
  });
});
The Firebase once() API reads the data from the database once and then stops observing it.
If you instead use the on() API, it will continue observing the database after getting the initial value, and it will call your code whenever the database changes.
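Adapting your snippet, a minimal sketch of the on() version (the callback now fires on the initial load and on every subsequent change):
database.ref('/').on('value', function(snapshot) {
  snapshot.forEach(function(childSnapshot) {
    if (childSnapshot.key === userName) {
      // Re-read the preferences every time the data changes.
      userPreferences = childSnapshot.val().UserPreferences;
    }
  });
});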
It sounds like you're looking to develop an application for scheduling. If that's the case, you should check out node-schedule.
Node Schedule is a flexible cron-like and not-cron-like job scheduler for Node.js. It allows you to schedule jobs (arbitrary functions) for execution at specific dates, with optional recurrence rules. It only uses a single timer at any given time (rather than reevaluating upcoming jobs every second/minute).
You can then use the database to keep the application's "state": on start-up, read all the upcoming jobs, load them into node-schedule, and let node-schedule do the rest.
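A minimal sketch of that start-up step, assuming a scheduledJobs node whose children store a runAt ISO timestamp and a message:
const schedule = require('node-schedule');

async function loadJobsOnStartup(database) {
  const snapshot = await database.ref('/scheduledJobs').once('value');
  snapshot.forEach(function(child) {
    const job = child.val();
    // node-schedule accepts a Date and fires the callback once at that time.
    schedule.scheduleJob(new Date(job.runAt), function() {
      console.log(job.message);
    });
  });
}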
The Google Cloud solution for scheduling a single item of future work is Cloud Tasks. Firebase is part of Google Cloud, so this is the most natural product to use. You can use it to avoid polling the database by specifying exactly when a Cloud Function should run to do the work you want.
I've written a blog post that demonstrates how to set up a Cloud Task to call a Cloud Function that deletes a document in Firestore with an exact TTL.
I have an app which has recently started to randomly lose IndexedDB items. By "lose" I mean they are confirmed as having been saved but, days later, are no longer present.
My hypothesis is that Chrome is discarding indexeddb items because the disk is full.
My question is, specifically: are there any events my app can listen for, or any Chrome log entries I can refer to, in order to confirm this is the case? NB: I'm not looking to fix the problem, I am only looking for ways to detect it.
Each application can query how much data is stored or how much more space is available for the app by calling the queryUsageAndQuota() method of the Quota API.
You can use a periodic Background Sync to estimate the used and free space allocated to temporary usage:
// index.html
navigator.serviceWorker.ready.then(registration => {
  registration.periodicSync.register('estimate-storage', {
    // Minimum interval at which the sync may fire.
    minInterval: 24 * 60 * 60 * 1000,
  });
});
// service_worker.js
self.addEventListener('periodicsync', event => {
  if (event.tag === 'estimate-storage') {
    event.waitUntil(estimateAndNotify());
  }
});
To estimate, use navigator.storage.estimate(), which returns a Promise that resolves with { usage, quota } values in bytes.
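The estimateAndNotify() used above isn't a standard function; a minimal sketch of what it could do:
async function estimateAndNotify() {
  const { usage, quota } = await navigator.storage.estimate();
  const percent = ((usage / quota) * 100).toFixed(1);
  // Log the result; a real app might postMessage() it to open pages instead.
  console.log(`Using ${usage} of ${quota} bytes (${percent}%).`);
}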
When TEMPORARY storage quota is exceeded, all the data (incl. AppCache, IndexedDB, WebSQL, File System API) stored for the oldest-used origin gets deleted.
You can switch to unlimitedStorage or persistent storage.
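Requesting persistent storage is a one-liner; a minimal sketch (the browser may still prompt the user or deny the request):
if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then((granted) => {
    // When granted, the origin's data is exempt from automatic eviction.
    console.log(granted ? 'Storage will persist.' : 'Storage may be evicted.');
  });
}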
Not sure how it will work in real life; this feature was initially still somewhat experimental. But here https://developers.google.com/web/updates/2016/06/persistent-storage you can find that:
This is still under development [...] the goal is to make users are aware of “persistent” data before clearing it [...] you can presume that “persistent” means your data won’t get cleared without the user being explicitly informed and directly in control of that deletion.
Which may answer your clarification in the comments that you are looking for a way
how can a user or my app know for sure that this is happening
So, I don't know about your app, but the user can at least get a notification. And that page is from 2016; something must have been done by now.
You may refer to the Browser storage limits and eviction criteria article on MDN.
My hypothesis is that Chrome is discarding indexeddb items because the disk is full
According to this article your hypothesis seems to be true.
Now, one way to confirm this could be to save the unique IDs of confirmed items in persistent storage (this will be very small compared to your actual data) and periodically check whether all of those items are still present in your IndexedDB.
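A hypothetical sketch of that audit, assuming the confirmed IDs live in localStorage and the data sits in an object store named items:
function auditItems(db) {
  const confirmed = JSON.parse(localStorage.getItem('confirmedIds') || '[]');
  const store = db.transaction('items', 'readonly').objectStore('items');
  confirmed.forEach((id) => {
    const request = store.get(id);
    request.onsuccess = () => {
      if (request.result === undefined) {
        // The item was confirmed as saved but has since disappeared.
        console.warn('Item ' + id + ' is missing from IndexedDB.');
      }
    };
  });
}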
You might also want to refer to best practices.
And finally, this comment might be helpful:
I think you need an IndexedDB Observer. These links may help you, if I got what you meant: Link-01, Link-02.
How should I design an on-login middleware that checks whether a recurring subscription has failed? I know that Stripe fires events when things happen, and that the best practice is webhooks. The problem is that I can't use webhooks in the current implementation, so I have to check when the user logs in.
The Right Answer:
As you're already aware, webhooks.
I'm not sure what you're doing that webhooks aren't an option in the current implementation: they're just a POST to a publicly-available URL, the same as any end-user request. If you can implement anything else in Node, you can implement webhook support.
Implementing webhooks is not an all-or-nothing proposition; if you only want to track delinquent payments, you only have to implement processing for one webhook event.
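For instance, a minimal Express sketch handling just that one event type (the endpoint path and environment variable names are assumptions):
const express = require('express');
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
const app = express();

app.post('/stripe-webhooks', express.raw({ type: 'application/json' }), (req, res) => {
  let event;
  try {
    // Verify the event really came from Stripe.
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers['stripe-signature'],
      process.env.STRIPE_WEBHOOK_SECRET
    );
  } catch (err) {
    return res.status(400).send(`Signature verification failed: ${err.message}`);
  }
  if (event.type === 'invoice.payment_failed') {
    // Flag the customer as delinquent in your own datastore here.
    console.log('Payment failed for customer', event.data.object.customer);
  }
  res.sendStatus(200);
});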
The This Has To Work Right Now, Customer Experience Be Damned Answer:
A retrieved Stripe Customer object contains a delinquent field. This field will be set to true if the latest invoice charge has failed.
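A minimal sketch of that login-time check, assuming the Stripe customer ID is stored on your user record:
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

async function isDelinquent(customerId) {
  const customer = await stripe.customers.retrieve(customerId);
  // delinquent is true when the latest invoice charge has failed.
  return customer.delinquent === true;
}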
N.B. This call may take several seconds—sometimes into the double digits—to complete, during which time your site will appear to have ceased functioning to your users. If you have a large userbase or short login sessions, you may also exceed your Stripe API rate limit.
I actually wrote the Stripe support team an email complaining about this issue (the need to loop through every invoice or customer when trying to pull out delinquent entries), and it appears that you can actually do this without webhooks or wasteful loops; it's just that the filtering functionality is undocumented. The current documentation shows that you can only modify queries of customers or invoices by count, created (date), and offset, but if you pass in other parameters the Stripe API will actually try to understand the query, so the cURL request:
https://api.stripe.com/v1/invoices?closed=false&count=100&offset=0
will look for only open invoices. You can also pass a delinquent=true parameter when looking for delinquent customers. I've only tested this in PHP, where returning delinquent customers looks like this:
Stripe_Customer::all(array(
  "delinquent" => true
));
But I believe this should work in Node.js:
stripe.customers.list(
  { delinquent: true },
  function(err, customers) {
    // asynchronously called
  }
);
The big caveat here is that because this filtering is undocumented it could be changed without notice... but given how obvious the approach is, I'd guess that it's pretty safe.