I have a very annoying problem with PM2's (version 5.1.0) pm2.delete(process, errback) function. After two days of debugging and trying to find the root of the problem, it is starting to look like an issue in the PM2 library itself.
Here is a simplified code snippet of what I am doing; I will explain the issue afterwards.
const debug = require("debug")("...");
const pm2 = require("pm2");
const Database = require("./Database");
...
class Backend {
    static cleanup() {
        return new Promise((resolve, reject) => {
            pm2.connect((error) => {
                if (error) {
                    debug("...");
                    reject();
                    return;
                }
                // MongoDB query and async forEach iteration
                Database.getCollection("...").find({
                    ...
                })
                    .forEach((document) => {
                        pm2.delete("service_...", (error, process) => {
                            if (!error && process[0].pm2_env.status === "stopped") {
                                debug("...");
                            } else {
                                debug("...");
                            }
                        });
                    })
                    .catch(...)
                    .then(...);
            });
        });
    }
    ...
}
Now to the problem: my processes do not get terminated, and the errback of pm2.delete(process, errback) is not executed AT ALL.
- I printed the error parameter of the connect callback and it is always null, so a connection to PM2 is established successfully.
- I placed debug output directly at the beginning of the delete callback and it is never printed.
- I wrapped the delete call in a while loop that only stops once the delete callback has run at least once, and the loop runs forever.
- I started a debug session and noticed that PM2's delete function in node_modules/pm2/lib/API.js (line 533) gets called, but for some reason the process does not get terminated and my provided callback function is never executed. (I stepped through the commands in debug mode but still cannot tell where it fails to execute the callback; it seems to happen in PM2's API.js, though.)
- I also noticed, when stepping through the code with breakpoints, that my process sometimes does get terminated by the API call if I cancel execution at a certain point in between (however, the callback was still not executed).
- I use PM2's delete function in another place in my software as well, and there it works like a charm.
So for some reason pm2.delete(process, errback) is not executed correctly, and I don't know what to do at this point. Is anyone experienced with PM2's source code, or has anyone had a similar issue at some point? Any advice would be helpful.
It looks like I found the root of the problem:
At a later point in the promise chain, after the forEach call, I call pm2.disconnect();. After further investigation I noticed that the chain is not perfectly synchronized, which in my case means that PM2 gets disconnected before the delete calls have completely finished. This explains the described results and the weird debugging behaviour.
All in all, the API itself works perfectly fine, but one has to pay very close attention to asynchronous code, as it can cause really complicated behaviour that is also hard to debug.
One has to make sure that the delete calls have really finished before pm2.disconnect(); is called.
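As a minimal sketch of that fix (assuming the cursor results have been collected into a documents array and that each document carries the name of the service to delete; both names are placeholders, not part of the original code), each pm2.delete() can be wrapped in a Promise and pm2.disconnect() called only after all of them have settled:

const deletions = documents.map((doc) => new Promise((resolve, reject) => {
    pm2.delete(doc.serviceName, (error, proc) => {
        if (error) return reject(error);
        resolve(proc);
    });
}));

Promise.all(deletions)
    .then(() => pm2.disconnect())   // safe: every delete callback has fired
    .catch((error) => {
        debug(error);
        pm2.disconnect();           // still disconnect after a failed delete
    });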
Related
Long story short, I need to be able to start/stop two child processes inside a Node.js instance. I'm writing tests for my React/Express app, and there's one part of the testing where I need to run the API server and the React dev server at the same time, wait for them to fully start up, then perform some operations with them, and then shut both of them down and continue with the rest of my testing.
I've looked into shelljs, child_process, and concurrently, but I have yet to find a way to elegantly EXIT these processes. I often end up with hanging processes that are left around after the script has terminated completely, or with a script that refuses to exit on its own.
For example, here's my (slightly hacky) way of trying to start my server with child_process, wait for it to finish starting up before moving on, and then attempting to close it down.
import { exec, ChildProcess } from 'child_process'

const startNode = async () : Promise<ChildProcess> => {
    /**
     * Starts our backend api
     */
    return new Promise((resolve, reject) => {
        const child : ChildProcess | null = exec('npm run start:dev:norestart')
        if (child === null || !child.stdout) return reject(null)
        child.stdout.on('data', data => {
            console.log(data)
            if (data.indexOf('App listening on port') !== -1) {
                resolve(child)
            }
        })
    })
}

const doThing = async () => {
    console.log('Start')
    const child = await startNode()
    child.kill('SIGKILL')
    console.log('Finished')
}

doThing()
However, as with my other attempted solutions, my script never exits, and the server never actually stops running (I tested this by opening another terminal at the same time to see if I could start it again, and I got an 'address already in use' error).
Does anybody know how to run multiple servers as subprocesses in a single Node.js instance and shut them down properly?
If your server is already running when you start your script, your Promise is never resolved and the script just hangs:
// What if the data value is 'error address already in use'? The condition
// below will fail and the promise is never resolved.
if (data.indexOf('App listening on port') !== -1) {
    resolve(child)
}
If the server is not running, your code should work, but I'd first make sure that the Promise is always settled (resolved or rejected) before searching for other problems.
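For illustration, here is a minimal sketch that makes the Promise always settle, rejecting both on a failure message and on a timeout (the matched strings, the assumption that the error text arrives on stdout, and the 30-second limit are all illustrative, not taken from the original code):

const { exec } = require('child_process')

const startNode = () => new Promise((resolve, reject) => {
    const child = exec('npm run start:dev:norestart')
    // Assumption: 30 seconds is a generous upper bound for startup.
    const timer = setTimeout(() => reject(new Error('server did not start in time')), 30000)
    child.stdout.on('data', (data) => {
        if (data.indexOf('App listening on port') !== -1) {
            clearTimeout(timer)
            resolve(child)   // success: hand the running child to the caller
        } else if (data.indexOf('address already in use') !== -1) {
            clearTimeout(timer)
            reject(new Error('port already taken'))
        }
    })
})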
I know that very similar questions have been asked, but I can't find one that fits my issue exactly; feel free to point me to one if there is.
sendSteamAuthTicket: () => {
    return new Promise((resolve, reject) => {
        greenworks.getAuthSessionTicket(function (ticket) {
            console.log('Successfully retrieved Steam API User Auth Ticket.')
            console.log(ticket.ticket.toString('hex'))
            resolve(ticket.ticket.toString('hex'))
        }, function (e) { throw e })
    })
}
Basically, this returns an endlessly pending Promise, leading me to believe I can't resolve a Promise from within a nested callback like this. Unfortunately, I can't send the data via an additional function because Electron won't let you interface with the browser that way.
Is it possible to do what I'm trying to do here, i.e. more or less obtain a delayed Promise value within this single function?
Alright, I figured it out; sorry for the redundant question, I had been banging my head on this for a few hours. Hearing that the Promise syntax was correct was apparently enough to get me through.
The greenworks package was being imported incorrectly (or rather correctly, according to their docs, but it needed a direct file path).
The package is a little outdated, so I wasn't sure why at first.
There's nothing wrong with the way you're calling resolve. If the success callback is called, resolve will be too, unless ticket is null or similar. My guess would be that greenworks is calling the error callback and then not re-throwing your thrown error. Try doing reject(e) instead of throw e.
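A minimal sketch of that change; rejecting inside the error callback lets the failure surface to the caller instead of disappearing inside greenworks:

sendSteamAuthTicket: () => {
    return new Promise((resolve, reject) => {
        greenworks.getAuthSessionTicket(
            (ticket) => resolve(ticket.ticket.toString('hex')),
            (e) => reject(e)   // settle the Promise instead of throwing into greenworks
        )
    })
}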
As I have heard, and actually know, recursion is often not the best solution when we are talking about synchronous code. But I would like to ask someone more experienced than I am about this solution. What do you think of the code below? Will it work okay (as I currently suppose, since it is not synchronous), or does it have some significant (or not so significant) drawbacks? Where? Why?
I would really appreciate your help; I'm doubtful about this part of the code.
Maybe there is a better solution for this?
I just want a function that can run a promise-returning function (a class method) repeatedly: every run is separated by an exact time span plus the time it takes the async function to resolve.
In case that is not clear enough, step by step it should:
exec target promise fn ->
wait for it to resolve ->
wait for the interval ->
exec target promise fn again.
Additionally, it should stop if the promise fn fails.
Thanks in advance!
export function poll(fn: Function, every: number): number {
    let pollId = 0;

    const pollRecursive = async () => {
        try {
            await fn();
        } catch (e) {
            console.error('Polling was interrupted due to the error', e);
            return;
        }
        pollId = window.setTimeout(() => {
            pollRecursive();
        }, every);
    };

    pollRecursive();
    return pollId;
}
Although you have a call to pollRecursive within the function definition, what you actually do is pass a new anonymous function that will call pollRecursive when triggered by setTimeout.
The timeout determines when the call to pollRecursive is queued, but depending on the contents of that queue, the actual function will run slightly later. In any case, this allows all other items in the queue to be processed in turn, unlike a tight loop or tight recursive calls, which would block the main thread.
One addition you may want to make is more graceful error handling, as transient faults are common on the Internet (i.e. there are a lot of places where something can go missing, which makes the occasional failed request part of "normal business" for a TypeScript app that polls).
In your catch block, you could still re-attempt the call after the next timer rather than stop processing. This would handle transient faults.
To avoid overloading the server after a fault, you can back off exponentially (i.e. double the every value for each contiguous fault). This reduces server load while still enabling your application to come back online later.
If you are running at scale, you should also add jitter to this back-off; otherwise the server will be flooded 2, 4, 8, 16, 32... seconds after a minor fault. This is called a stampede. By adding a little random jitter, the clients don't all come back at once.
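A minimal sketch of those two ideas combined (the 60-second cap and the jitter range are illustrative choices, not prescribed values):

// Poll with exponential back-off plus jitter on contiguous faults.
function pollWithBackoff(fn, every, maxDelay = 60000) {
    let delay = every;
    const run = async () => {
        try {
            await fn();
            delay = every;                          // success: reset to the base interval
        } catch (e) {
            console.error('Poll failed, backing off', e);
            delay = Math.min(delay * 2, maxDelay);  // double per contiguous fault
        }
        // Random jitter spreads clients out so they don't all return at once.
        setTimeout(run, delay + Math.random() * delay * 0.5);
    };
    run();
}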
In some Node.js scripts that I have written, I notice that even when the last line is a synchronous call, it sometimes doesn't complete before Node.js exits.
I have never seen a console.log statement fail to run/complete before exit, but I have seen some other statements fail to complete before exit, and I believe they are all synchronous. I could see why the callback of an async function would fail to fire in this case, of course.
The code in question is a ZeroMQ .send() call like so:
var zmq = require('zmq');
var pub = zmq.socket('pub');
pub.bindSync('tcp://127.0.0.1:5555');

setInterval(function () {
    pub.send('polyglot');
}, 500);
The above code works as expected... but if I remove setInterval() and just call it like this:
var zmq = require('zmq');
var pub = zmq.socket('pub');
pub.bindSync('tcp://127.0.0.1:5555');
pub.send('polyglot'); //this message does not get delivered before exit
process.exit(0);
...then the message will not get delivered: the program apparently exits before the pub.send() call completes.
What is the best way to ensure a statement completes before exiting in Node.js? Shutdown hooks would work here, but I am afraid that would just be masking the problem, since you can't put everything that needs to run into a shutdown hook.
This problem can also be demonstrated this way:
if (typeof messageHandler[nameOfHandlerFunction] == 'function') {
    reply.send('Success');
    messageHandler[nameOfHandlerFunction](null, args);
} else {
    reply.send('Failure'); // *** this call might not complete before the error is thrown below ***
    throw new Error('SmartConnect error: no handler for ZMQ message sent from Redis CSV uploader.');
}
I believe this is a legitimate/serious problem, because a lot of programs just need to publish messages and then die. How can we effectively ensure all messages get sent (though not necessarily received)?
EDIT:
One (potential) way to fix this is to do:
socket.send('xyz');
socket.close(); // supposedly this will block until the above message is sent
process.exit(0);
Diving into zeromq.node, you can see that Socket.send just pushes your data onto _outgoing:
this._outgoing.push([msg, flags]);
... and then calls _flush if (and only if) zmq.ZMQ_SNDMORE is unset:
this._flush();
It looks like _flush is actually doing the socket write. If _flush() fails, it emits an error.
Edit:
I'm guessing that calling pub.unbind() before exiting will force _flush() to be called:
pub.unbind('tcp://127.0.0.1:5555', function (err) {
    if (err) console.log(err);
    process.exit(0); // Probably not even needed
});
I think the simple answer is that the socket.send() method is in fact asynchronous, and that is why we see the behavior described in the OP.
The question then is: why does socket.send() have to be asynchronous? Could there not be a blocking/synchronous version that we could use instead for the purpose intended in the OP? Could we please have socket.sendSync()?
I'm trying to write a Sencha Touch 2.0 WebSql proxy that supports tree data. I started from tomalex0's WebSql/Sqlite proxy: https://github.com/tomalex0
When modifying the script I ran into a strange debugging issue (I'm using Chrome 17.0.963.78 m):
The following snippet just gets jumped over. The transaction never takes place! But when I set a breakpoint above or below and run the same code in the console, it does work!
dbConn.transaction(function (tx) {
    console.log(tx);
    if (typeof callback == 'function') {
        callback.call(scope || me, results, me);
    }
    tx.executeSql(sql, params, successcallback, errorcallback);
});
The blue log is the one you can see; the green log comes from the success handler. If the query were actually performed, the same log would appear once more above it (it's a SELECT * FROM ...;, so when running it multiple times without changing data I would expect the same result).
I also found out that when I add the code block to the watch expressions, it runs as well.
It isn't being skipped over. It is being scheduled, but not executed until much later, due to the asynchronous nature of the request:
http://ejohn.org/blog/how-javascript-timers-work/
Because JavaScript is single-threaded and the surrounding code runs synchronously, making the asynchronous call only schedules it; the call itself is delayed until after the synchronous code has finished executing.
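A tiny self-contained illustration of that scheduling order:

// Even with a zero delay, the asynchronous callback runs only after
// the currently executing synchronous code has finished.
console.log('before');
setTimeout(function () { console.log('async callback'); }, 0);
console.log('after');
// Prints: before, after, async callback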