Cloud Functions continuously executing after timeout - javascript

I have a Firestore app with a cloud function that triggers off a cron job.
The cloud function takes a long time and pulls a large amount of data. I've set the memory limit of my function to 2GB and the timeout to 540 seconds, and "Retry on failure" is NOT checked.
The cloud function essentially looks like this:
export const fetchEpisodesCronJob = pubsub
  .topic('daily-tick')
  .onPublish(() => {
    console.log(`TIMING - Before Fetches ${rssFeeds.length} feeds`, new Date())
    return Promise.map(
      rssFeeds.map(rssFeed => rssFeed.url),
      url => fetch(url).catch(e => e).then(addFeedToDB), // <-- This can take a long time
      {
        concurrency: 4
      }
    ).catch(e => {
      console.warn('Error fetching feeds', e);
    })
  })
What I see in the logs, however, is this (it continues indefinitely):
As you can see, the function finishes with a status of "timeout", but it starts right back up again. What's weird is that I've specified a 540-second limit, yet the timeout comes in at a consistent 5-minute mark. Also note that I checked the Cloud Console and I manually triggered the last cron job pub/sub at 10:00 AM, yet you can see multiple pub/sub triggers since then. (So I believe the cron job is set up fine.)
I also get consistent errors repeating in the console:
My question is: how do I prevent the cloud function from re-executing when it has already been killed due to a timeout? Is this a bug, or do I need to explicitly set a kill statement somewhere?
Thanks!

So this is a bug with Firebase. According to @MichaelBleigh:
Turns out there's a backend bug in Cloud Functions that happens when a function is created with the default timeout and the timeout is later increased, and that is what is causing this. A fix is being worked on and will hopefully address the issue soon.
If you're reading this between now and when the bug is fixed, I found that the function will be triggered again every 300 seconds. So an immediate workaround for me is to set the timeout to 250 seconds and keep the runtime of the function as short as possible. This may mean increasing the memory allocation for the time being.
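For reference, here is a minimal sketch of that workaround expressed in code, assuming a firebase-functions SDK version that supports runWith() and using a namespace import (the 250-second and 2GB values mirror the numbers above; the handler body is elided):
import * as functions from 'firebase-functions'

// Sketch only: pin the timeout below the 300-second re-trigger window
// and keep the memory high so the work still finishes in time.
export const fetchEpisodesCronJob = functions
  .runWith({ timeoutSeconds: 250, memory: '2GB' })
  .pubsub.topic('daily-tick')
  .onPublish(() => {
    // ... same feed-fetching body as in the question ...
  })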

Related

Asynchronously stopping a loop from outside node.js

I am using node.js 14 and currently have a loop that is made by a recursive function and a setTimeout, something like this:
this.timer = null;
async recursiveLoop() {
  // Do stuff
  this.timer = setTimeout(this.recursiveLoop.bind(this), rerun_time);
}
But sometimes this loop gets stuck and I want it to automatically notice it, clean up and restart. So I tried doing something like this:
this.timer = null;
async recursiveLoop() {
  this.long_timer = setTimeout(() => { throw new Error('Taking too long!'); }, tooLong);
  // Do stuff
  this.timer = setTimeout(this.recursiveLoop.bind(this), rerun_time);
}
main() {
  // Do other asynchronous stuff
  recursiveLoop()
    .then()
    .catch((e) => {
      console.log(e.message);
      cleanUp();
      recursiveLoop();
    });
}
I can't quite debug where it gets stuck, because it seems quite random and the program runs on a virtual machine. I still couldn't reproduce it locally.
This makeshift solution, instead of working, keeps crashing the whole Node.js application, and now I am the one stuck. I have the constraint of working with Node.js 14, without using microservices, and I have never used child_process before. I am a complete beginner. Please help me!
If you have a black box of code (which is all you've given us) with no way to detect errors on it and you just want to know when it is no longer generating results, you can put it in a child_process and ask the code in the child process to send you a message every time it runs an iteration. Then, in your main process, you can set a timer that resets itself every time it gets one of these "health" messages from the child. If the timer fires without getting a health message, then the child must be "stuck" because you haven't heard from it within your timeout time. You can then kill the child process at that point and restart it.
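A rough sketch of that watchdog pattern, assuming the loop can be moved into its own file (worker.js is a hypothetical name) and can call process.send() once per iteration:
// main.js - parent process (a sketch, not a drop-in solution)
const { fork } = require('child_process');

const HEALTH_TIMEOUT = 60 * 1000; // how long silence counts as "stuck" - tune this
let child;
let watchdog;

function resetWatchdog() {
  clearTimeout(watchdog);
  watchdog = setTimeout(() => {
    console.log('No health message received in time, restarting child');
    child.kill();
    startChild();
  }, HEALTH_TIMEOUT);
}

function startChild() {
  child = fork('./worker.js'); // worker.js runs recursiveLoop()
  child.on('message', (msg) => {
    if (msg === 'health') resetWatchdog(); // child reports each iteration
  });
  resetWatchdog();
}

startChild();

// In worker.js, call process.send('health') at the top of each iteration.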
But that is a giant hack. You should FIX the code that gets stuck, or at least understand what's going on. You're probably leaking memory, file handles or database handles, running code that mishandles locks, or hitting unhandled errors. All of these are indications of code that should be fixed.

Is there a way to minimise CPU usage by reducing number of write operations to chrome.storage?

I am making a chrome extension that keeps track of the time I spend on each site.
In background.js I am using a map (stored as an array) that saves the list of sites, as shown:
let observedTabs = [['chrome://extensions', [time, time, 'icons/sad.png']]];
Every time I update my current site, the starting and ending times of my visit to that site are stored in the map under the site's key.
To achieve this, I am performing chrome.storage.sync.get and chrome.storage.sync.set inside tabs.onActivated, tabs.onUpdated, windows.onFocusChanged and idle.onStateChanged.
This, however, results in very high CPU usage for Chrome (around 25%) due to the many read and write operations from (and to) storage.
I tried to solve the problem by using global variables in background.js and initialising them to undefined. Using the function shown below, I read from storage only when the current variable is undefined (the first time background.js tries to get the data); at all other times it just uses the already-set global variable.
let observedTabs = undefined;
function getObservedTabs(callback) {
  if (observedTabs === undefined) {
    chrome.storage.sync.get("observedTabs", (observedTabs_obj) => {
      callback(observedTabs_obj.observedTabs);
    });
  } else {
    callback(observedTabs);
  }
}
This solves the problem of the costly repeated read operations.
As for the write operations, I considered using runtime.onSuspend to write to storage once my background script stops executing, as shown:
chrome.runtime.onSuspend.addListener(() => {
  getObservedTabs((_observedTabs) => {
    observedTabs = _observedTabs;
    chrome.storage.sync.set({"observedTabs": _observedTabs});
  });
});
This, however, doesn't work, and the documentation also warns about it:
Note that since the page is unloading, any asynchronous operations started while handling this event are not guaranteed to complete.
Is there a workaround that would allow me to minimise my writing operations to storage and hence reduce my CPU usage?
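One possible direction (a sketch, not taken from the thread): keep the map in memory and debounce the writes, so a burst of tab/idle events collapses into a single chrome.storage.sync.set call. This assumes a persistent MV2 background page where setTimeout keeps running; the 5000 ms delay is illustrative.
let writeTimer = null;
const WRITE_DELAY_MS = 5000; // illustrative value

function scheduleWrite() {
  // Restart the countdown on every update; only the last one actually writes.
  if (writeTimer) clearTimeout(writeTimer);
  writeTimer = setTimeout(() => {
    writeTimer = null;
    chrome.storage.sync.set({ observedTabs });
  }, WRITE_DELAY_MS);
}

// Call scheduleWrite() instead of chrome.storage.sync.set() inside
// tabs.onActivated, tabs.onUpdated, windows.onFocusChanged and idle.onStateChanged.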

JavaScript: changing timeout for infinite loops?

Sometimes I make mistakes and get infinite loops in JavaScript (example: while(true){}). Firefox helpfully reports "Error: Script terminated by timeout" or that the memory allocation limit is exceeded. How can I change the length of this timeout? I want to shorten it during development.
I could also solve this problem by writing a function that I call inside the loop that counts iterations and throws an exception on too many iterations. But I'm also interested in how to change the timeout value.
I have not seen this timeout documented, and have not found it by searching.
Unfortunately, the maximum recursion limit is not user-configurable from within a page running JavaScript. The limits also vary across browsers.
This Browserscope test showcases the results of user testing from another Stack Overflow question on the subject: What are the js recursion limits for Firefox, Chrome, Safari, IE, etc?
Aside from writing your own timeout or utilising a promise chain to process data, you won't be able to change the timeout yourself.
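For what it's worth, a minimal sketch of the "write your own timeout" idea mentioned above, using a wall-clock deadline checked inside the loop (the 5000 ms figure is arbitrary):
// Bail out of a loop after a self-imposed deadline instead of waiting
// for the browser's slow-script handling.
const deadline = Date.now() + 5000; // 5 seconds for development

while (true) {
  // ... loop body ...
  if (Date.now() > deadline) {
    throw new Error("Self-imposed loop timeout exceeded");
  }
}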
There are multiple ways to work around the timeout on infinite loops.
One method is setInterval:
setInterval(function() {
  document.body.innerHTML += "Hello!<br>";
}, 1000); // The second argument is the interval, in milliseconds, at which the function is executed.
Another, better method is window.requestAnimationFrame. This gives you a higher-quality animation (see Why is requestAnimationFrame better than setInterval or setTimeout for more details); here's an example of it in motion:
function run() {
  setTimeout(function() {
    window.requestAnimationFrame(run);
    document.body.innerHTML += "Hello!<br>";
  }, 1000); // This sets an interval for the animation to run; if you take the setTimeout out, the function will loop as quickly as possible without breaking the browser.
}
run();
D. Joe is correct. In Firefox, browse to about:config and change the number-of-seconds value in dom.max_script_run_time.
See http://kb.mozillazine.org/Dom.max_script_run_time for details; you can also eliminate the timeout entirely.
This will partially answer your question, but here is how I handle such a case. It is more a workaround than a fix.
IMPORTANT: Do not put this code in a production environment. It should only be used in local dev while debugging.
As I tend to be debugging when I stumble upon this kind of case, I am most likely using console.log to output to the console. As such, I override the console.log function as follows anywhere near the entry point of my app:
const log = console.log
const maxLogCount = 200
let currentLogCount = 0
console.log = function (...msg) {
  if (currentLogCount >= maxLogCount) {
    throw new Error('Maximum console log count reached')
  }
  currentLogCount++
  log(...msg)
}
Then when I accidentally do this:
while (true) {
  console.log("what is going on?")
}
It will error out after 200 outputs. This prevents the tab from locking up for half a minute and me having to reopen a new tab, and so on.
It is usually just the browser's way of saying the script is out of memory. You could solve this by using for loops, or by creating an index variable and incrementing it each time, like so:
var index = 0;
while (true) {
  if (index > certainAmount) {
    break;
  }
  index++;
}
If you really want something to go on forever, read about setInterval() or setTimeout().

Rxjs: In what scenario do you want to use a scheduler?

I don't understand what the RxJS documentation means by "scheduler", so I'm trying to understand it through scenarios where it's useful, so I can understand what a scheduler is.
tl;dr
In most cases you will never need to concern yourself with Schedulers, if only because for 90% of cases the default is fine.
Explanation
A Scheduler is simply a way of standardizing time when using RxJS. It effectively schedules events to occur at some time in the future.
We do this by using the schedule method to queue up new operations that the scheduler will execute in the future. How the Scheduler does this is completely up to the implementation. Often though it is simply about choosing the most efficient means of executing a future action.
Take a simple example whereby we are using the timer operator to execute an action at some time in the future.
var source = Observable.timer(500);
This is pretty standard fare for RxJS. The Scheduler comes in when you ask the question: what does 500 mean? In the default case it will equal 500 milliseconds, because that is the convention and that is what the default Scheduler will do: it will wait 500 milliseconds and then emit an event.
However, there are cases where we may not want the flow of time to operate normally. The most common use case for this is when we are testing. We don't actually want to wait 500 milliseconds for a task to complete, otherwise our test suite would take ages to actually complete!
In that case we would actually want to control the flow of time so that we don't have to wait for 500 milliseconds to elapse before we can verify the result of a stream. In this case we could use the TestScheduler, which can execute tasks synchronously so that we don't have to deal with any of that asynchronous messiness.
let scheduler = new TestScheduler();
// Override the default scheduler with the test scheduler
let source = Observable.timer(500, scheduler);
//Subscribe to the source, which behaves normally
source.subscribe(x => expect(x).to.be(0));
//When this gets called all pending actions get executed.
scheduler.flush();
There are some other more corner cases where we want to alter the flow of time as well. For instance if we are operating in the context of a game we would likely want to link our scheduling to the requestAnimationFrame or to some other faux time scale, which would necessitate the use of something like the AnimationFrameScheduler or the VirtualTimeScheduler.
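For example, here is a small sketch of ticking on animation frames (written with RxJS 6+ imports, unlike the RxJS 5-style snippets above; the take(5) is arbitrary):
import { animationFrameScheduler, interval } from 'rxjs';
import { take } from 'rxjs/operators';

// Each value is emitted on an animation frame rather than via the default
// async (macrotask) scheduling, which suits DOM-driven animation work.
interval(0, animationFrameScheduler)
  .pipe(take(5))
  .subscribe(i => console.log('frame tick', i));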
Scenario 1
You have an initial value and want the subscriber to first get the initial value and then get some other value (depending on a condition).
import { BehaviorSubject, asyncScheduler } from 'rxjs';
import { observeOn, tap } from 'rxjs/operators';

const dispatcher = (new BehaviorSubject("INITIAL"))
  .pipe(observeOn(asyncScheduler));

let did = false; // condition

dispatcher.pipe(
  tap((value) => {
    if (!did) {
      did = true;
      dispatcher.next("SECOND");
    }
  }))
  .subscribe((state) => {
    console.log('Subscription value: ', state);
  });
// Output: INITIAL ... SECOND
Without .pipe(observeOn(asyncScheduler)) it will output the values in the reverse order (SECOND before INITIAL), since Subject.next is a synchronous operation.

Counter-Up not increasing int in Meteor application

I am using the Counter-Up plugin for my Meteor application.
On first site load it works fine, but there seems to be a problem with real-time changes.
I want to display total games created in my web app, so I have this helper:
totalGames: function () {
  return Games.find().count();
}
and this is my rendered function:
Template.home.rendered = function () {
  $('.counter').counterUp({
    delay: 10,
    time: 500
  });
};
Now the problem is that the counter does not update correctly with reactivity. If user A sees the counter showing 1 on the home page and a new game is suddenly created, the number changes to 12 and not 2.
How can I solve this issue?
Any help would be greatly appreciated.
Give this a try:
Template.home.rendered = function() {
  this.autorun(function() {
    if (Template.currentData() && Games.find().count())
      $('.counter').counterUp({delay: 10, time: 500});
  });
};
In theory, this should rerun the counterUp initialization any time the Games count changes. The Template.currentData business is just a hack to make the autorun work inside of rendered; see my related answer here.
When I get this behavior, I run through the steps below. I've also run into issues with randomness. The delay is caused by Minimongo, which sits in your browser, not getting the correct cache from MongoDB.
Here are the things I do to kick-start DDP and get the data through the wire.
1) I close out of the terminal. This should have no effect, but sometimes things get hung up. Just try it.
2) rm -rf ~/[name of project].meteor/local/.mirror
3) Double-check that I saved my changes.
