I have multiple firebase triggers organized as follows:
Function1.ts:
exports.fileupload = db.collection("/x").onSnapshot(async (snap) => {
  snap.docChanges().forEach((change) => { /* something */ });
});
Function2.ts:
exports.something = db.collection("/y").onSnapshot(async (snap) => {
  snap.docChanges().forEach((change) => { /* something */ });
});
Then in index.ts:
const ai = require("./Function1");
const users = require("./Function2");
exports.fileupload = ai.fileupload;
exports.something = users.something;
This causes the Function1.ts function to trigger multiple times when a new document is added. If I don't export them from index.ts as well, it triggers only once, but then any Firebase Auth functions don't trigger at all. Is there a better way to organise this?
I understand that triggers can fire multiple times for the same event, but I also can't find the eventId for onSnapshot as specified here.
What causes this?
I don't see any database triggers in your code at all. You have two snapshot query listeners defined, but they are not triggers as shown in the documentation. If you had a Firestore trigger, it would look like this:
const functions = require('firebase-functions');

exports.myFunction = functions.firestore
  .document('my-collection/{docId}')
  .onWrite((change, context) => { /* ... */ });
If you define snapshot listeners at the top level of your code, they will execute exactly once for each server instance that gets allocated to run any actual trigger that fires. Any number of server instances can be allocated, depending on the load applied to your functions.
Since we can't see any of the code that actually defines a trigger, we can't fully understand what's going on here. But it's generally safe to say that long-running snapshot listeners are not appropriate in Cloud Functions code, which is stateless by nature. If you want to query the database in a trigger, use a one-time get() instead of a listener.
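For illustration, here's a minimal sketch combining those two points: a real Firestore trigger that runs a one-time get() query, with context.eventId available for de-duplicating retried events (the collection names are just placeholders standing in for your "/x" and "/y"):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.fileupload = functions.firestore
  .document('x/{docId}')
  .onCreate(async (snap, context) => {
    // context.eventId identifies this event across retries
    console.log('handling event', context.eventId);
    // one-time query instead of a long-lived onSnapshot listener
    const result = await admin.firestore().collection('y').get();
    result.forEach((doc) => { /* something */ });
  });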
I am fairly familiar with invoking Firebase Firestore functions manually for testing. I have done this in the past with the docs here. In short, I make a wrappedOnWrite variable, assign it to testEnv.wrap(<my firebase function>), and then invoke it with an object that represents the Firestore document before and after the call. Example:
const testEnv = functions({ projectId: **** }, '****');

let wrappedOnWrite:
  | WrappedScheduledFunction
  | WrappedFunction<Change<QueryDocumentSnapshot>>;

beforeAll(() => {
  // wrap the firebase function, ready to be invoked
  wrappedOnWrite = testEnv.wrap(moderateCompetition);
});

const changeDoc: Change<QueryDocumentSnapshot> = {
  before: testEnv.firestore.makeDocumentSnapshot(dataBefore, path),
  after: testEnv.firestore.makeDocumentSnapshot(dataAfter, path),
};

// call the firebase function with the changes passed in
await wrappedOnWrite(changeDoc);
This is great for testing my Firestore collections with Jest; however, I have never seen this done with Firebase Storage, and I haven't been able to find many useful resources either. I have a pretty basic .onFinalize function for Storage:
export const blurImages = functions.storage.object().onFinalize(async (object) => { /* ... */ });
Is there any way to manually test it like in the example I gave above? When I initially deployed the function, it ran recursively and charged my company a couple of dollars, and I need to be able to run it periodically so that the recursion doesn't happen again on the live function. Thanks in advance :)
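For reference, here's roughly what I'm hoping for, modeled on the Firestore example above (a sketch assuming firebase-functions-test's storage helper, testEnv.storage.makeObjectMetadata; the file name and bucket are hypothetical):

// wrap the storage function, ready to be invoked
const wrappedOnFinalize = testEnv.wrap(blurImages);

// build fake metadata for the finalized object
const objectMetadata = testEnv.storage.makeObjectMetadata({
  name: 'images/test.jpg',
  bucket: 'my-test-bucket.appspot.com',
  contentType: 'image/jpeg',
});

// invoke the function with the fake object
await wrappedOnFinalize(objectMetadata);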
If I use createListenerMiddleware to create 2 separate middlewares, that handle different responsibilities, is that going to have any significant performance cost over creating a single listener middleware? Are there any undesirable ways they can interact?
One concern: if I dispatch addListener, removeListener, or clearAllListeners, will those actions reach both middlewares, or be consumed by the first one in the chain?
The alternative is creating a single shared listener middleware in its own module that can be imported into both of the other modules to have listeners added. That would likely cover most use cases, but not if I want to pass in different extra arguments or different error handling.
I created the listener middleware :)
You should really only have one listener middleware instance in an app, and I can't really think of a good reason why you would want to have multiple listener middleware instances. And yes, the add/remove listener actions will definitely be handled and stopped by the first listener middleware instance that sees them, with no differentiation.
If the goal is to have separate listener+effect definitions that don't rely on importing the same middleware instance, I'd suggest trying the third file organization approach shown in the docs, where feature files export a callback that gets startListening as a parameter:
https://redux-toolkit.js.org/api/createListenerMiddleware#organizing-listeners-in-files
// feature1Slice.ts
import type { AppStartListening } from '../../app/listenerMiddleware'

const feature1Slice = createSlice(/* */)
const { action1 } = feature1Slice.actions
export default feature1Slice.reducer

export const addFeature1Listeners = (startListening: AppStartListening) => {
  startListening({
    actionCreator: action1,
    effect: () => {},
  })
}
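The listenerMiddleware module that AppStartListening comes from looks roughly like this (paraphrased from the same docs page; RootState and AppDispatch are your store's own types):

// app/listenerMiddleware.ts
import { createListenerMiddleware } from '@reduxjs/toolkit'
import type { TypedStartListening } from '@reduxjs/toolkit'
import type { RootState, AppDispatch } from './store'

export const listenerMiddleware = createListenerMiddleware()

export type AppStartListening = TypedStartListening<RootState, AppDispatch>

// every feature file gets handed this one shared entry point
export const startAppListening =
  listenerMiddleware.startListening as AppStartListening

Then, during store setup, you call addFeature1Listeners(startAppListening) (and its siblings) once, so all listeners end up on the single middleware instance.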
exports.createHook = functions.database.ref('/Responses/{ResponsesId}').onCreate((snap, context) => {
  console.log('triggered');
  var response = snap.val();
  console.log(response.text);
});
I've written to Firestore and it's not triggering. What am I missing?
Here is a picture of my Functions panel; it's clearly deploying to the cloud.
Here is a picture of the logs; it only logs when the function is built, so it isn't executing.
I have written to firestore and it is not triggering
You may have written to Firestore, but your code is written as a Firebase Realtime Database trigger (it uses functions.database). You need to use Firestore triggers to respond to events in Firestore (alternatively, you need to write your updates into a Realtime Database, not Firestore).
It is very easy to get these confused (they're named so similarly!) but they are not the same and need to be coded differently.
For example, the prototype for a Firestore onCreate trigger should look something like:
exports.createHook = functions.firestore
  .document('Responses/{ResponsesId}')
  .onCreate((snap, context) => { /* ... */ });
Also, in the comment thread I note that you said "onCreate should call every time there is a write to the reference". This is not correct: onCreate is only called when the document is first created.
I'm using redux and redux-saga in an application to manage state and asynchronous actions. In order to make my life easier, I wrote a class that acts essentially as a saga manager, with a method that "registers" a saga. This register method forks the new saga and combines it with all other registered sagas using redux-saga/effects/all:
import { all, fork } from 'redux-saga/effects';

class SagasManager {
  private _sagas: any[] = [];
  private _combined: any;

  public registerSaga = (saga: any) => {
    this._sagas.push(fork(saga));
    this._combined = all(this._sagas);
  }
}
This class is then used by my store to get the _combined saga, supposedly after all sagas are registered:
const store = Redux.createStore(
  reducer,
  initialState,
  compose(Redux.applyMiddleware(sagaMiddleware, otherMiddleware)),
);

sagaMiddleware.run(sagasManager.getSaga());
However, I ran into the problem that depending on circumstances (like import order), this doesn't always work as intended. What was happening was that some of the sagas weren't getting registered before the call to sagaMiddleware.run.
I worked around this by providing a callback on SagasManager:
class SagasManager {
  public registerSaga = (saga: any) => {
    this._sagas.push(fork(saga));
    this._combined = all(this._sagas);
    this.onSagaRegister();
  }
}
And then the store code can use this as
sagasManager.onSagaRegister = () => sagaMiddleware.run(sagasManager.getSaga());
This seems to work, but I can't find in the docs whether it is safe. I did see that .run returns a Task, which has methods for cancellation and the like, but since my problem is only in that awkward window between when the store is constructed and the application is rendered, I don't think that would be an issue.
Can anyone explain whether this is safe, and if not what a better solution would be?
It may depend on what you mean by "safe". What exactly do you mean by that in this case?
First, here's the source of runSaga itself, and where it gets used by the saga middleware.
Looking inside runSaga, I see:
export function runSaga(options, saga, ...args) {
  const iterator = saga(...args)

  // skip a bunch of code

  const env = {
    stdChannel: channel,
    dispatch: wrapSagaDispatch(dispatch),
    getState,
    sagaMonitor,
    logError,
    onError,
    finalizeRunEffect,
  }

  const task = proc(env, iterator, context, effectId, getMetaInfo(saga), null)

  if (sagaMonitor) {
    sagaMonitor.effectResolved(effectId, task)
  }

  return task
}
What I'm getting out of that is that nothing "destructive" will happen when you call runSaga(mySagaFunction). However, if you call runSaga() with the same saga function multiple times, it seems like you'll probably have multiple copies of that saga running, which could result in behavior your app doesn't want.
You may want to try experimenting with this. For example, what happens if you have a counter app, and do this?
function* doIncrement() {
  yield take("DO_INCREMENT");
  yield put({type : "INCREMENT"});
}

sagaMiddleware.run(doIncrement);
sagaMiddleware.run(doIncrement);

store.dispatch({type : "DO_INCREMENT"});

console.log(store.getState().counter);
// what's the value?
My guess is that the counter would be 2, because both copies of doIncrement would have responded.
If that sort of behavior is a concern, then you probably want to make sure that prior sagas are canceled.
I actually ran across a recipe for canceling sagas during hot-reloading a while back, and included a version of that in a gist for my own usage. You might want to refer to that for ideas.
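Applying that cancellation idea to the SagasManager from the question, a minimal sketch (this wiring is my assumption, not the exact gist) would cancel the previous root task before re-running with the updated set:

let rootTask = sagaMiddleware.run(sagasManager.getSaga());

sagasManager.onSagaRegister = () => {
  // cancel the previous root saga so earlier sagas don't end up running twice
  rootTask.cancel();
  // re-run with the full, updated set of registered sagas
  rootTask = sagaMiddleware.run(sagasManager.getSaga());
};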
So, here's the thing.
We have a main process and a renderer process; they are isolated from each other, and the only bridge is IPC messages (renderer to main) or BrowserWindow webContents.send (main to renderer).
I'm handling my events in componentDidMount like so:
ipcRenderer.on('deviceWasPlugged', this.deviceWasPluggedListener);
And of course I remove this listener in componentWillUnmount, to prevent a memory leak in case the component is destroyed:
ipcRenderer.removeListener('deviceWasPlugged', this.deviceWasPluggedListener);
Naturally, I have such a method in my class/component:
deviceWasPluggedListener = (e, somedata) => {
  // boring stuff goes there...
}
All is peachy, except for one thing: it doesn't seem right to me.
I still think there should be a better place to keep my event listeners.
I don't think I can keep them in Redux middleware, because the event-based model won't let me unsubscribe from these events there, and multiple listeners would be created every time I try to dispatch something, even if I switch on action.type:
const someMiddleware = store => next => action => {
  if (action.type === 'SOME_TYPE') {
    // ...
  }
  next(action);
};

export default someMiddleware;
So, please tell me: is there any better place to keep my backend event listeners? They should fire every time I need them, and they shouldn't cause the memory-leak problem of exceeding maxListeners.
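For comparison, here's a minimal sketch of the once-per-store alternative: a Redux middleware factory body runs a single time when the store is created, so attaching the IPC listener there registers it exactly once rather than once per action (the action type and payload shape here are assumptions):

const ipcMiddleware = store => {
  // runs once at applyMiddleware time, so only one listener is ever registered
  ipcRenderer.on('deviceWasPlugged', (e, somedata) => {
    store.dispatch({ type: 'DEVICE_WAS_PLUGGED', payload: somedata });
  });
  return next => action => next(action);
};

export default ipcMiddleware;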