I am fairly familiar with invoking Firebase Firestore functions manually for testing. I have done this in the past with the docs here. In short, I create a wrappedOnWrite variable, assign it the result of testEnv.wrap(myFirebaseFunction) from firebase-functions-test, and then invoke it with an object that represents the Firestore document before and after the change. Example:
const testEnv = functions({ projectId: **** }, '****');
let wrappedOnWrite:
  | WrappedScheduledFunction
  | WrappedFunction<Change<QueryDocumentSnapshot>>;

beforeAll(() => {
  // wrap the firebase function, ready to be invoked
  wrappedOnWrite = testEnv.wrap(moderateCompetition);
});

const changeDoc: Change<QueryDocumentSnapshot> = {
  before: testEnv.firestore.makeDocumentSnapshot(dataBefore, path),
  after: testEnv.firestore.makeDocumentSnapshot(dataAfter, path),
};

// call the firebase function with the before/after change passed in
await wrappedOnWrite(changeDoc);
This is great for testing my Firestore collections with Jest; however, I have never seen this done with Firebase Storage, and I haven't been able to find many useful resources either. I have a pretty basic .onFinalize function for Storage:
export const blurImages = functions.storage.object().onFinalize(async (object) => {});
Is there any way to manually test it like in the example I gave above? When I initially deployed the function, it ran recursively and charged my company a couple of dollars, and I need to be able to test it periodically so that the recursion doesn't happen on the live function. Thanks in advance :)
Related
I have multiple firebase triggers organized as follows:
Function1.ts:
exports.fileupload = db.collection("/x").onSnapshot(async (snap) => {
  snap.docChanges().forEach((change) => { /* something */ });
});

Function2.ts:

exports.something = db.collection("/y").onSnapshot(async (snap) => {
  snap.docChanges().forEach((change) => { /* something */ });
});
Then in index.ts:
const ai = require("./Function1");
const users = require("./Function2");
exports.fileupload = ai.fileupload;
exports.something = users.something;
This causes the Function1.ts function to trigger multiple times when a new document is added. If I don't export them from index.ts as well then it triggers only once but then any firebase auth functions don't trigger at all. Is there a better way to organise this?
I understand that triggers can trigger multiple times for the same event, but I also can't find the eventId for onSnapshot as specified here
What causes this?
I don't see any database triggers in your code at all. You have two snapshot query listeners defined, but they are not triggers as you see in the documentation. If you had a Firestore trigger, it would look like this:
const functions = require('firebase-functions');
exports.myFunction = functions.firestore
.document('my-collection/{docId}')
.onWrite((change, context) => { /* ... */ });
If you define snapshot listeners at the top level of your code, they will execute exactly once for each server instance that is allocated to run any other actual trigger that happens. Any number of server instances can be allocated, based on the load applied to your function.
Since we can't see any of the code that actually defines the trigger, we can't fully understand what's going on here. But it's generally safe to say that long-running snapshot listeners are not appropriate for Cloud Functions code, which is stateless in nature. If you want to query the database in a trigger, you should use a get() instead of a listener.
When using const db = firebase.database(), does it matter where I declare this in a cloud function script?
For example, index.ts which contains all of my cloud functions, should I declare it at the top, or in each individual function?
const db = firebase.database()
export const functionA = functions.https.onCall(async (data, context) => {
// use db here
});
export const functionB = functions.https.onCall(async (data, context) => {
// use db here
});
OR
export const functionA = functions.https.onCall(async (data, context) => {
const db = firebase.database()
});
export const functionB = functions.https.onCall(async (data, context) => {
const db = firebase.database()
});
Or does this not matter?
The first approach creates the db instance when the code loads.
The second approach creates the db instance when the code runs.
Neither is inherently better than the other, and the firebase.database() operation is very lightweight, so it's likely to make little difference in practice.
What does make a difference is whether you load the database SDK to begin with. Some of the SDKs in Firebase are quite big, and not each Cloud Function needs all SDKs. So a common trick to speed up load/cold-start times is to move the require('firebase-...') statements into the body of the Cloud Function(s) that require them.
Normally, if you want to interact, from a Cloud Function, with the Realtime Database you just need to initialize the Admin SDK and get the Database service for the default app (or a given app), as explained here in the doc.
So you would do something like:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
// Get the Database service for the default app
const db = admin.database();
To get more details on what Frank explains about Cold Start in his answer, you should read the following article: https://medium.com/firebase-developers/organize-cloud-functions-for-max-cold-start-performance-and-readability-with-typescript-and-9261ee8450f0
However in your case, since your two Cloud Functions use the Admin SDK, there shouldn't be any difference between the two approaches presented in your question, as Frank mentioned.
exports.createHook = functions.database.ref('/Responses/{ResponsesId}').onCreate((snap, context) => {
  console.log('triggered');
  var response = snap.val(); // note: val is a method and must be called
  console.log(response.text);
});
I've written to Firestore and it's not triggering; what am I missing?
Here is a picture of my functions panel; it's clearly deploying to the cloud.
Here is a picture of the logs; it's only logging when the function is built, so it isn't executing.
I have written to firestore and it is not triggering
You may have written to Firestore, but your code is written as a Firebase Realtime Database trigger (it uses functions.database). You need to use Firestore triggers to respond to events in Firestore (alternatively, you need to write your updates into a Realtime Database, not Firestore).
It is very easy to get these confused (they're named so similarly!) but they are not the same and need to be coded differently.
For example, the prototype for a Firestore onCreate trigger should look something like:
exports.createHook = functions.firestore
  .document('Responses/{ResponsesId}')
  .onCreate((snapshot, context) => { /* ... */ });
Also in the comment thread I note that you said "onCreate should call every time there is a write to the reference". This is not correct. onCreate should only be called when the document is first written to.
I'm trying to deploy a cloud function that will trigger whenever a document is added to a particular collection as below:
const admin = require("firebase-admin");
const functions = require("firebase-functions");
const Firestore = require("@google-cloud/firestore");
const firestore = new Firestore({ projectId: config.projectId });
admin.initializeApp(config);

exports.checkCapacity = functions.firestore
  .document("gran_canaria/las_palmas_1/miller_pendientes/{albnum}")
  .onCreate(async (snapshot, context) => {});
However this throws the Deployment failure error:
Failed to configure trigger providers/cloud.firestore/eventTypes/document.create#firestore.googleapis.com (gcf.us-central1.checkCapacity)
The error clears if I remove the wildcard and change the reference to:
"gran_canaria/las_palmas_1/miller_pendientes/albnum"
I've attempted changing the method to onWrite(), deleting and re-deploying the function and checking the cloud status at https://status.cloud.google.com/ but can't find any solutions.
I have been able to successfully deploy a Cloud Function with a trigger on an onCreate event on my Cloud Firestore.
I was successful by simply using the provided template in the Console UI when creating the Cloud Function.
The index.js used is the sample provided by GCP when creating the function, which simply prints to the logs which document triggered the change.
Looking at the documentation for Firestore, I see that you probably used the samples provided there, so maybe using the same settings will make it work for you.
I just started learning Node.js with Postgres and found the pg-promise package.
I read the docs and examples, but I don't understand where I should put the initialization code. I'm using Express and I have many routes.
Do I have to put the whole initialization (including the pg-monitor init) into every single file where I want to query the db, or do I only need to include and initialize/configure them in server.js?
If I initialize them only in server.js, what should I include in the other files where I need a db query?
In other words, it's not clear to me whether pg-promise and pg-monitor configuration/initialization is a global or a local action.
It's also unclear whether I need to create a db variable and end pgp for every single query:
var db = pgp(connection);
db.query(...).then(...).catch(...).finally(pgp.end);
You need to initialize the database connection only once. If it is to be shared between modules, then put it into its own module file, like this:
const initOptions = {
  // initialization options;
};

const pgp = require('pg-promise')(initOptions);

const cn = 'postgres://username:password@host:port/database';
const db = pgp(cn);

module.exports = {
  pgp, db
};
See supported Initialization Options.
UPDATE-1
And if you try creating more than one database object with the same connection details, the library will output a warning into the console:
WARNING: Creating a duplicate database object for the same connection.
at Object.<anonymous> (D:\NodeJS\tests\test2.js:14:6)
This points out that your database usage pattern is bad, i.e. you should share the database object, as shown above, not re-create it all over again. And since version 6.x it became critical, with each database object maintaining its own connection pool, so duplicating those will additionally result in poor connection usage.
Also, it is not necessary to export pgp (the initialized library instance). Instead, you can just do:
module.exports = db;
And if in some module you need to use the library's root, you can access it via property $config:
const db = require('../db'); // your db module
const pgp = db.$config.pgp; // the library's root after initialization
UPDATE-2
Some developers have been reporting (issue #175) that certain frameworks, like NextJS, manage to load modules in such a way that breaks the singleton pattern, which results in the database module being loaded more than once, producing the duplicate database warning, even though from the NodeJS point of view it should just work.
Below is a work-around for such integration issues, by forcing the singleton into the global scope, using Symbol. Let's create a reusable helper for creating singletons...
// generic singleton creator:
export function createSingleton<T>(name: string, create: () => T): T {
  const s = Symbol.for(name);
  let scope = (global as any)[s];
  if (!scope) {
    scope = {...create()};
    (global as any)[s] = scope;
  }
  return scope;
}
Using the helper above, you can modify your TypeScript database file into this:
import * as pgLib from 'pg-promise';

const pgp = pgLib(/* initialization options */);

interface IDatabaseScope {
  db: pgLib.IDatabase<any>;
  pgp: pgLib.IMain;
}

export function getDB(): IDatabaseScope {
  return createSingleton<IDatabaseScope>('my-app-db-space', () => {
    return {
      db: pgp('my-connect-string'),
      pgp
    };
  });
}
Then, in the beginning of any file that uses the database you can do this:
import {getDB} from './db';
const {db, pgp} = getDB();
This will ensure a persistent singleton pattern.
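The singleton behavior itself can be checked outside any framework. Below is a minimal plain-JavaScript sketch of the same Symbol.for() trick, using a counter instead of a real database object (the names demo-space and value are made up for illustration):

```javascript
// Generic singleton via the global Symbol registry, as in the helper above.
function createSingleton(name, create) {
  const s = Symbol.for(name);
  let scope = global[s];
  if (!scope) {
    scope = { ...create() };
    global[s] = scope;
  }
  return scope;
}

let factoryCalls = 0;

// Two lookups under the same name share one object; the factory runs once.
const a = createSingleton('demo-space', () => {
  factoryCalls++;
  return { value: 42 };
});
const b = createSingleton('demo-space', () => {
  factoryCalls++;
  return { value: 99 };
});

console.log(factoryCalls); // 1
console.log(a === b);      // true
console.log(a.value);      // 42
```

Because Symbol.for() resolves through a process-wide registry, the shared scope survives even when a bundler evaluates the defining module more than once.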
A "connection" in pgp is actually an auto-managed pool of multiple connections. Each time you make a request, a connection is taken from the pool, used, and then released back to the pool. That's a big part of why vitaly-t makes such a big deal about creating only one instance of pgp for your whole app. The only reason to end your connection pool is if you are definitely done using the database, i.e. you are gracefully shutting down your app.