I have two cloud functions:
One cloud function sets or updates an existing scheduled job.
The other cancels an existing scheduled job.
I am using import * as schedule from 'node-schedule'; to manage scheduling jobs.
The problem is that the createJob function is triggered and a jobId is returned, but later, when I trigger the cancelJob function, none of the previously scheduled cron jobs exist, because node-schedule keeps its jobs in memory and I can't access them:
This will return an empty object: const allJobs = schedule.scheduledJobs;
Does anyone have a solution for this situation?
UTILS (the main logic that is called when one of my cloud functions is triggered):
// sendgrid
import * as sgMail from '@sendgrid/mail';
import * as schedule from 'node-schedule';

sgMail.setApiKey('apikey');

import {
  FROM_EMAIL,
  EMAIL_TEMPLATE_ID,
  MESSAGING_SERVICE_SID,
} from './constants';

export async function updateReminderCronJob(data: any) {
  try {
    const {
      to,
      jobIds,
      timestamps,
      selectedEmail,
      // ...
    } = data;

    const message = {
      to,
      from: FROM_EMAIL,
      templateId: EMAIL_TEMPLATE_ID,
    };

    const jobReferences: any[] = [];

    // Stop existing jobs
    if (jobIds && jobIds.length > 0) {
      jobIds.forEach((j: any) => {
        const job = schedule.scheduledJobs[j?.jobId];
        if (job) {
          job.cancel();
        }
      });
    }

    // Create new jobs
    timestamps.forEach((date: number) => {
      const job = schedule.scheduleJob(date, () => {
        if (selectedEmail) {
          sgMail.send(message);
        }
      });
      if (job) {
        jobReferences.push({
          jobId: job.name,
        });
      }
    });

    console.warn('jobReferences', jobReferences);
    return jobReferences;
  } catch (error) {
    console.error('Error updateReminderCronJob', error);
    return null;
  }
}

export async function cancelJobs(jobs: any) {
  const allJobs = schedule.scheduledJobs;
  jobs.forEach((job: any) => {
    if (!allJobs[job?.jobId]) {
      return;
    }
    allJobs[job.jobId].cancel();
  });
}
node-schedule will not work reliably in Cloud Functions because it requires that scheduling and execution all happen on a single machine that stays running without interruption. Cloud Functions does not support this behavior: it dynamically scales the number of instances servicing requests up and down, all the way to zero (even if you set min instances to 1, your active instances may still drop to 0 in some cases). You will get unpredictable behavior if you try to schedule this way.
The only way you can get reliable scheduling using Cloud Functions is with pub/sub functions as described in the documentation. Firebase scheduled functions make this a bit easier by managing some of the details. You will not be able to dynamically control repeating jobs, so you will need to build some way to periodically run a job and check to see if it should run at that moment.
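For example, here is a minimal sketch of that approach, assuming the reminders from the question are stored in a Firestore collection named reminders with sendAt and sent fields (all hypothetical names), and with sendReminderEmail standing in for the SendGrid call shown in the question:

const functions = require("firebase-functions");
const admin = require("firebase-admin");
const sgMail = require("@sendgrid/mail");

admin.initializeApp();
const db = admin.firestore();
// Assumption: the API key is provided via an environment variable.
sgMail.setApiKey(process.env.SENDGRID_API_KEY);

// Hypothetical helper standing in for the SendGrid send from the question;
// the reminder document is assumed to carry to/from/templateId fields.
async function sendReminderEmail(reminder) {
  await sgMail.send({
    to: reminder.to,
    from: reminder.from,
    templateId: reminder.templateId,
  });
}

// Scheduled function: runs every minute and sends any reminders that are due.
exports.processDueReminders = functions.pubsub
  .schedule("every 1 minutes")
  .onRun(async () => {
    const now = admin.firestore.Timestamp.now();

    // "reminders", "sent" and "sendAt" are assumed names; this query may
    // require a composite index in Firestore.
    const due = await db
      .collection("reminders")
      .where("sent", "==", false)
      .where("sendAt", "<=", now)
      .get();

    const work = due.docs.map(async (doc) => {
      await sendReminderEmail(doc.data());
      await doc.ref.update({ sent: true });
    });

    return Promise.all(work);
  });

With this layout, cancelling a reminder means deleting or flagging its Firestore document instead of calling job.cancel(), so the state survives cold starts and scaling to zero.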
I'm trying to create a scheduled function on Firebase Cloud Functions with third-party APIs. Because the data collected through the third-party API and passed to this scheduled function is huge, it fails with Function invocation was interrupted. Error: memory limit exceeded.
I have written this index.js (below) with help, but I'm still looking for the right way to handle the large output data inside the scheduled function.
index.js
const firebaseAdmin = require("firebase-admin");
const firebaseFunctions = require("firebase-functions");

firebaseAdmin.initializeApp();
const fireStore = firebaseAdmin.firestore();

const express = require("express");
const axios = require("axios");
const cors = require("cors");

const serviceToken = "SERVICE-TOKEN";
const serviceBaseUrl = "https://api.service.com/";

const app = express();
app.use(cors());

const getAllExamples = async () => {
  var url = `${serviceBaseUrl}/examples?token=${serviceToken}`;
  var config = {
    method: "get",
    url: url,
    headers: {}
  };
  return axios(config).then((res) => {
    console.log("Data saved!");
    return res.data;
  }).catch((err) => {
    console.log("Data not saved: ", err);
    return err;
  });
}

const setExample = async (documentId, dataObject) => {
  return fireStore.collection("examples").doc(documentId).set(dataObject).then(() => {
    console.log("Document written!");
  }).catch((err) => {
    console.log("Document not written: ", err);
  });
}

module.exports.updateExamplesRoutinely = firebaseFunctions.pubsub.schedule("0 0 * * *").timeZone("America/Los_Angeles").onRun(async (context) => {
  const examples = await getAllExamples(); // This returns an object of large size, containing 10,000+ arrays
  const promises = [];
  for (var i = 0; i < examples.length; i++) {
    var example = examples[i];
    var exampleId = example["id"];
    if (exampleId && example) promises.push(setExample(exampleId, example));
  }
  return Promise.all(promises);
});
Firebase's official documentation only explains how to set the timeout and memory allocation manually, as below, and I'm looking for how to incorporate that into the scheduled function above.
exports.convertLargeFile = functions
  .runWith({
    // Ensure the function has enough memory and time
    // to process large files
    timeoutSeconds: 300,
    memory: "1GB",
  })
  .storage.object()
  .onFinalize((object) => {
    // Do some complicated things that take a lot of memory and time
  });
Firebase's official documentation only explains how to set the timeout and memory allocation manually, as below, and I'm looking for how to incorporate that into the scheduled function above.
You should do as follows:
module.exports.updateExamplesRoutinely = firebaseFunctions
  .runWith({
    timeoutSeconds: 540,
    memory: "8GB",
  })
  .pubsub
  .schedule("0 0 * * *")
  .timeZone("America/Los_Angeles")
  .onRun(async (context) => { ... });
However, you may still encounter the same error if you process a huge number of "examples" in your Cloud Function. As you mentioned in the comment to the other answer, it is advisable to cut the work into chunks.
How to do that depends heavily on your specific case (e.g. do you process 10,000+ examples on each run, or is this only going to happen once, in order to "digest" a backlog?).
You could process only a couple of thousand docs in the scheduled function and schedule it to run every xx seconds, or you could distribute the work among several instances by using a Pub/Sub-triggered version of your Cloud Function, as in the sketch below.
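As a rough sketch of the second option (not a definitive implementation): the scheduled function only fetches and splits the data, and a Pub/Sub-triggered function writes each chunk. The topic name "process-examples-chunk" and the chunk size are assumptions, and getAllExamples is the helper already defined in the question's index.js.

const firebaseAdmin = require("firebase-admin");
const firebaseFunctions = require("firebase-functions");
const { PubSub } = require("@google-cloud/pubsub");

firebaseAdmin.initializeApp();
const fireStore = firebaseAdmin.firestore();
const pubSubClient = new PubSub();

const CHUNK_SIZE = 500; // a Firestore batch allows at most 500 writes

// Scheduled function: fetch the data and publish it to Pub/Sub in chunks,
// so no single invocation has to hold and write all 10,000+ examples.
module.exports.updateExamplesRoutinely = firebaseFunctions.pubsub
  .schedule("0 0 * * *")
  .timeZone("America/Los_Angeles")
  .onRun(async (context) => {
    const examples = await getAllExamples();
    const publishes = [];
    for (let i = 0; i < examples.length; i += CHUNK_SIZE) {
      const chunk = examples.slice(i, i + CHUNK_SIZE);
      publishes.push(
        pubSubClient
          .topic("process-examples-chunk") // hypothetical topic name
          .publishMessage({ data: Buffer.from(JSON.stringify(chunk)) })
      );
    }
    return Promise.all(publishes);
  });

// Pub/Sub-triggered function: each invocation writes one chunk to Firestore.
module.exports.processExamplesChunk = firebaseFunctions.pubsub
  .topic("process-examples-chunk")
  .onPublish((message) => {
    const chunk = message.json;
    const batch = fireStore.batch();
    chunk.forEach((example) => {
      if (example && example.id) {
        batch.set(fireStore.collection("examples").doc(example.id), example);
      }
    });
    return batch.commit();
  });

Each of these functions can also get its own runWith settings, so only the part that actually needs extra memory is given it.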
I have an app that uses GraphQL subscriptions for chat functionality. I managed to get the subscription working, but after introducing the withFilter function to filter which clients the messages get sent to, I am getting the following error on the frontend:
Subscription field must return Async Iterable. Received: undefined
Here is my subscription resolver:
const { PubSub, withFilter } = require('graphql-yoga');

const pubsub = new PubSub();
pubsub.ee.setMaxListeners(30);

const Subscription = {
  detailedConversation: withFilter(
    () => pubsub.asyncIterator('detailedConversation'),
    (payload, args) => {
      return true;
    }
  )
};

module.exports = {
  Subscription,
  pubsub
};
As the second parameter of withFilter has to be a function that returns a boolean, I have just set it to return true for the time being.
graphql-yoga uses graphql-subscriptions under the hood, and after reading the documentation on the implementation here, I can't see what I'm doing wrong.
FYI, the error occurs when attempting to subscribe to a conversation for the first time, not while sending a message.
I know this question is old, but I'm going to share my solution for others who might come looking for exactly the same thing.
First thing to note is that I'm using the graphql-redis-subscriptions implementation instead of the default one.
userUpdated: {
  subscribe: withFilter(
    (_, args, { pubsub }) => pubsub.asyncIterator('userUpdated'),
    (payload, vars) => vars.usersId.includes(payload.userUpdated.id)
  )
}
You just check whether the two arguments match and then return the result.
UserUpdated: {
  subscribe: withFilter(
    () => pubsub.asyncIterator('UserUpdated'),
    (payload, variables) => {
      return (payload.UserUpdated.id === variables.channelid);
    },
  ),
}
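For reference, the same subscribe-wrapper shape applied to the resolver from the question would look roughly like this (a sketch only; the filter still just returns true as a placeholder):

const { PubSub, withFilter } = require('graphql-yoga');

const pubsub = new PubSub();

const Subscription = {
  detailedConversation: {
    // The field needs a `subscribe` function. Assigning withFilter(...) to the
    // field directly (without the subscribe key) leaves subscribe undefined,
    // which matches the "must return Async Iterable" error in the question.
    subscribe: withFilter(
      () => pubsub.asyncIterator('detailedConversation'),
      (payload, args) => {
        return true; // placeholder filter, as in the question
      }
    ),
  },
};

module.exports = { Subscription, pubsub };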
I have been using React for a few months now and am diving deeper into application structure and design patterns. Of particular interest to me is handling multiple asynchronous calls that may depend on each other, depending on the logic of user journeys.
I am currently building an app with multiple asynchronous operations, including using local storage, connecting to and querying Firebase for phone number authentication, and a WebRTC React Native video calling platform.
While trying to figure out how to handle data received from these APIs, I have come across a few code examples where the authors use plain classes, not class components, to help build their application. My question is: what is this design pattern called, and how should I approach using it? What are the best resources to read about its use?
For example, for Firebase phone number authentication, a developer I found on GitHub writes this:
class FirebaseService {
  user = ''
  uid = ''
  userStatusDatabaseRef = ''
  userStatusFirestoreRef = ''

  constructor() {
    firebase.auth().onAuthStateChanged((user) => {
      if (user) {
        const { uid, phoneNumber } = firebase.auth().currentUser._user
        UserStore.user.setUID(uid)
        UserStore.user.setPhoneNumber(phoneNumber)
        let ref = firebase.database().ref(`/Users/${UserStore.user.uid}`)
        ref.on('value', this.gotUserData)
      }
      this.setListenConnection()
    })
  }

  confirmPhone = async (phoneNumber) => {
    const phoneWithAreaCode = phoneNumber.replace(/^0+/, '+972');
    return new Promise((res, rej) => {
      firebase.auth().verifyPhoneNumber(phoneWithAreaCode)
        .on('state_changed', async (phoneAuthSnapshot) => {
          switch (phoneAuthSnapshot.state) {
            case firebase.auth.PhoneAuthState.AUTO_VERIFIED:
              await this.confirmCode(phoneAuthSnapshot.verificationId, phoneAuthSnapshot.code, phoneAuthSnapshot)
              res(phoneAuthSnapshot)
              break
            case firebase.auth.PhoneAuthState.CODE_SENT:
              UserStore.setVerificationId(phoneAuthSnapshot.verificationId)
              res(phoneAuthSnapshot)
              break
            .......
Here he creates a FirebaseService class, and a UserStore is also referenced. I, however, just have a FirebaseService.js file, and I pass a callback, authorizeWithToken, into it from my container component to store the result in state via Redux.
import firebase from 'react-native-firebase';

export const authorizeFirebase = (phoneNumber, authorizeWithToken) => {
  try {
    firebase.auth()
      .verifyPhoneNumber(phoneNumber)
      .on('state_changed', (phoneAuthSnapshot) => {
        // console.log(phoneAuthSnapshot);
        switch (phoneAuthSnapshot.state) {
          case firebase.auth.PhoneAuthState.CODE_SENT: // or 'sent'
            console.log('code sent');
            break;
          case firebase.auth.PhoneAuthState.ERROR: // or 'error'
            console.log('verification error');
            console.log(phoneAuthSnapshot.error);
            break;
          case firebase.auth.PhoneAuthState.AUTO_VERIFY_TIMEOUT: // or 'timeout'
            console.log('auto verify on android timed out');
            break;
          // '-------------------------------------------------------------'
          case firebase.auth.PhoneAuthState.AUTO_VERIFIED: // or 'verified'
            console.log('auto verified on android');
            const { verificationId, code } = phoneAuthSnapshot;
            const credential = firebase.auth.PhoneAuthProvider.credential(verificationId, code);
            firebase.auth().signInWithCredential(credential).then(result => {
              const user = result.user;
              user.getIdToken().then(accessToken => {
                authorizeWithToken(accessToken);
My service is a function and his is a class; neither of them is a component.
To give another example, here is a class "service" from a code example provided by ConnectyCube, the video messaging platform I'm using:
import ConnectyCube from 'connectycube-reactnative'

class ChatService {
  // Chat - Core
  connect(user) {
    return new Promise((resolve, reject) => {
      if (!user) reject()
      ConnectyCube.chat.connect(
        {
          userId: user.id,
          password: user.password,
        },
        (error, contacts) => {
          if (!error && contacts) {
            resolve(contacts)
          } else {
            reject(error)
          }
        },
      )
    })
  }

  disconnect() {
    ConnectyCube.chat.disconnect()
  }
}

// create instance
const Chat = new ChatService()
// lock instance
Object.freeze(Chat)

export default Chat
I haven't come across classes as services until now, but I would really like to know how to use them and what patterns to follow. They look super useful, and at the moment I have a spaghetti junction of local state, Redux, and services connected via callbacks and complex async/await setups. Any ideas are greatly appreciated.
I built a TypeScript MongoDB client wrapper. For some reason, when I call the function that gets the connection, its callback is called twice.
There are two calls in total to the get() function: one before the export, as you can see, and another from a Mocha test.
I am pretty new to TS and JS in general, but this seems a bit off.
import {Db, MongoClient} from "mongodb";
import {MongoConfig} from '../config/config'

class DbClient {
  private cachedDb : Db = null;

  private async connectToDatabase() {
    console.log('=> connect to database');
    let connectionString : string = "mongodb://" + MongoConfig.host + ":" + MongoConfig.port;
    return MongoClient.connect(connectionString)
      .then(db => {
        console.log('=> connected to database');
        this.cachedDb = db.db(MongoConfig.database);
        return this.cachedDb;
      });
  }

  public async get() {
    if (this.cachedDb) {
      console.log('=> using cached database instance');
      return Promise.resolve(this.cachedDb);
    } else {
      return this.connectToDatabase();
    }
  }
}

let client = new DbClient();
client.get();
export = client;
where the console output is:
=> connect to database
=> connected to database
=> connected to database
Any particular reason this is misbehaving?
There are two calls in total to the get() function: one before the export, as you can see, and another from a Mocha test.
I suspect the output actually has an additional => connect to database line. As I said in the comments, there's a "race condition": get() can be called multiple times before this.cachedDb is set, which leads to multiple connections and multiple instances of Db being created.
For example:
const a = client.get();
const b = client.get();
// then
a.then(resultA => {
  b.then(resultB => {
    console.log(resultA !== resultB); // true
  });
});
Solution
The problem can be fixed by storing the promise as the cached value. (Also, there's no need for the async keyword on these methods, as Randy pointed out: no values are awaited in any of them, so you can just return the promises.)
import {Db, MongoClient} from "mongodb";
import {MongoConfig} from '../config/config'

class DbClient {
  private cachedGet: Promise<Db> | undefined;

  private connectToDatabase() {
    console.log('=> connect to database');
    const connectionString = `mongodb://${MongoConfig.host}:${MongoConfig.port}`;
    return MongoClient.connect(connectionString)
      .then(client => {
        console.log('=> connected to database');
        return client.db(MongoConfig.database);
      });
  }

  get() {
    if (!this.cachedGet) {
      this.cachedGet = this.connectToDatabase();
      // clear the cached promise on failure so that if a caller
      // calls this again, it will try to reconnect
      this.cachedGet.catch(() => {
        this.cachedGet = undefined;
      });
    }
    return this.cachedGet;
  }
}

let client = new DbClient();
client.get();
export = client;
Note: I'm not sure about the best way of using MongoDB (I've never used it), but I suspect connections should not be so long-lived as to be cached like this (or should probably only be cached for a short time and then disconnected). You'll need to investigate that yourself.
I want to split out my RabbitMQ connection code and call it across different components, so that the connection and channel are initialized only ONCE and I can use them whenever I want, instead of having to open the connection again each time.
What happens right now is that I call the function below over and over again every time I want to pass something to my exchange and queue (so if I want to pass 20 individual pieces of data to RabbitMQ, I end up opening and closing both the connection and the channel 20 times).
Any solutions?
const exchange = "Exchange";
const queue = "Queue";

const passSomeData = async payload => {
  const amqp = require("amqplib").connect("amqp://localhost");
  let ch;
  let connection;
  let publish = amqp
    .then(function(conn) {
      connection = conn;
      return conn.createConfirmChannel();
    })
    .then(function(chn) {
      ch = chn;
      ch.assertQueue(queue, { durable: true });
      return ch.assertExchange(exchange, "topic", { durable: true });
    })
    .then(function() {
      const data = {
        content: "x",
        title: "y",
      };
      ch.bindQueue(queue, exchange, "routingKey");
      return ch.publish(exchange, "routingKey", Buffer.from(JSON.stringify(data)), {
        persistent: true
      });
    })
    .then(() => {
      setTimeout(function() {
        connection.close();
      }, 250);
    });
};

module.exports = passSomeData;
Answer copied from here
This is a general JavaScript question, not one specific to RabbitMQ or the amqplib library.
I believe you can open a connection at the module level and use it within your passSomeData method. Alternatively, passSomeData can lazily open a connection if the module-level connection variable is null and then re-use that connection, as in the sketch below.
At some point you may need to use a connection pool, but that depends on your use case and workload.
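A minimal sketch of that lazy, module-level caching (the getChannel helper and the channelPromise variable are placeholder names, not part of amqplib) might look like this:

const amqplib = require("amqplib");

const exchange = "Exchange";
const queue = "Queue";

// Cache the channel promise at module level so the connection is opened once
// and shared by every call to passSomeData.
let channelPromise = null;

function getChannel() {
  if (!channelPromise) {
    channelPromise = amqplib
      .connect("amqp://localhost")
      .then(async (conn) => {
        const ch = await conn.createConfirmChannel();
        await ch.assertQueue(queue, { durable: true });
        await ch.assertExchange(exchange, "topic", { durable: true });
        await ch.bindQueue(queue, exchange, "routingKey");
        return ch;
      })
      .catch((err) => {
        // Reset the cache on failure so the next call retries the connection.
        channelPromise = null;
        throw err;
      });
  }
  return channelPromise;
}

const passSomeData = async payload => {
  const ch = await getChannel();
  return ch.publish(exchange, "routingKey", Buffer.from(JSON.stringify(payload)), {
    persistent: true
  });
};

module.exports = passSomeData;

With this shape, calling passSomeData 20 times reuses the same connection and channel; the connection is only closed when the process shuts down (or explicitly, if you add a close helper).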
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.