The documentation for a child process's 'error' event says the following:
The 'error' event is emitted whenever:
The process could not be spawned, or
The process could not be killed, or
Sending a message to the child process failed.
See also subprocess.kill() and subprocess.send().
The first case presumably applies when the cp.spawn method fails to spawn the child process successfully.
Is the bit at the bottom suggesting that cases 2 and 3 can only occur when the kill and send methods fail? For instance, if the child process is killed by other means (like another process calling process.kill on it), the 'error' event would not be raised. It seems that would be the case, but I want to confirm.
If I never call kill or send, can I safely ignore those cases?
My suspicion was correct (I am fairly certain): how would the OS be able to hook back into Node's code to raise the right event?
Searching for emit('error', in Node's child_process source shows that it is only raised by cp.spawn, kill, send and disconnect.
If the process is killed by some other means, that event will not be raised.
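In practice, this means that if you care about the child dying through external means, you should also listen for the 'exit' (or 'close') event, which fires however the process ends. A minimal sketch (some-command is a placeholder):
var spawn = require('child_process').spawn;
var child = spawn('some-command');

// fires only when spawning, killing or messaging fails from Node's side
child.on('error', function (err) {
  console.error('spawn/kill/send failed:', err);
});

// fires no matter how the process died, including an external kill;
// inspect code and signal to tell why
child.on('exit', function (code, signal) {
  console.log('exited with code', code, 'and signal', signal);
});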
When I run this code:
var ws = new WebSocket("wss://mydomain.example/socket/service");
ws.addEventListener("error", function (error)
{
console.error("Got error=", error);
});
Is it possible that the WebSocket connection fails (emit error) before I can attach the event listener for the error event?
Looking at the documentation https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/WebSocket I cannot see this detail documented anywhere.
According to the WHATWG spec it seems that the constructor should run the request in parallel – is there a guarantee that I can attach the error listener before any possible error event can fire?
The WebSocket constructor runs without synchronization of any kind, and the connection may indeed encounter an error before the line with ws.addEventListener("error", ...); is executed! However, this is not a problem, because the spec also says that in case of an error, the actual error event must be fired as part of steps that are queued as a task. In practice, this means the WebSocket constructor is logically required to behave as if it ran an anonymous function with zero timeout that fires the error event.
So the actual error can happen before the JS code attaches the event listener, but all the events (open, close, error, message) can only fire after the event loop runs again, so the above code will always have time to attach the event handlers before the events can be fired.
See https://github.com/whatwg/websockets/issues/13#issuecomment-1039442142 for details.
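Conceptually, the guarantee behaves like this (a sketch of the observable behavior, not the actual spec steps):
var ws = new WebSocket("wss://mydomain.example/socket/service");

// even if the handshake has already failed at this point, the 'error'
// event was queued as a task, so it cannot fire until the current
// script finishes -- the listener below is always in place first
ws.addEventListener("error", function (event) {
  console.error("Got error event:", event);
});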
I am trying to test some code that uses web workers. I want to check that the error path works; i.e., that an onerror handler correctly recovers from an exception thrown in the web worker.
The problem I'm running into is that the exception propagating out of the web worker causes it to be considered unhandled. For example, it prints in the browser's console log and causes my testing environment (a simple wrapper around Karma) to consider the test as failed.
So, how do I indicate to the browser/Karma that an exception bubbling out of a given web worker is expected and should not be considered unhandled? I don't want it to print to the console, and I want my test to pass.
The only idea I've come up with is to wrap the web worker code in a try/catch and marshal the caught exception out via postMessage, but that requires throwing away quite a lot of information, because the error object has to be stringified (otherwise it triggers a DataCloneError).
Call preventDefault on the error event object given to the onerror handler.
worker.onerror = function(e) {
  e.preventDefault(); // <-- "Hey browser, I handled it!"
  ...
};
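Note that the ErrorEvent passed to onerror only exposes fields like message, filename and lineno rather than the original Error object, so if your test needs to verify the failure, assert against those fields.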
var spawn = require('child_process').spawn;
var child = spawn('some-command');
I know I can guard against ENOENT (when some-command doesn't exist) with
child.on('error', function(err) { ... })
Now, how do I asynchronously determine that the process is running and no error has happened?
I could listen for error and close events, but that still leaves the case of "is running" looking identical to "the operating system hasn't gotten around to looking for the file yet", which can cause nasty race conditions.
An 'open' event would be nice, but the docs don't mention one.
Does a functional workaround exist?
The information you are looking for (process running, no errors) is not available from the host OS in such an easy-to-use format.
Unless the child process prints something that is parsed and tracked in Node (by code you or someone else will have to write), or exits with a status code, the OS offers no indication that Node.js or io.js could obtain from a system call and wrap in a JS API for the developer.
At least on Linux, the operating system status of a process is limited to one of:
process is running; OR
process has exited (status number indicates OK or ERROR); OR
non-existent (no such PID)
Furthermore, once the exit status has been retrieved with wait() or waitpid(), it is no longer available.
The idea of an "error" is often application dependent and these application errors are not tracked by the operating system -- except for the exit status integer the process reports when it exits.
To give a clearer example: many apps have commands that open files for processing, and will print an error message when an input file cannot be opened and then proceed to the next command. This failure is not part of any per-PID process status that the operating system tracks and keeps in memory for other processes to read. It may appear in the stderr or stdout stream and can be read that way, but that requires specific coding for the parent (or other processes) to interpret it correctly. Alternatively, other apps exit immediately when something has gone seriously wrong and set the exit status to a non-zero number, indicating an error. That exit status, and the fact that the process terminated, are available from the operating system.
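In Node terms, the closest you can get is to watch the streams and the exit status yourself. A minimal sketch (some-command remains the placeholder from the question):
var spawn = require('child_process').spawn;
var child = spawn('some-command');

child.on('error', function (err) {
  // spawning itself failed, e.g. ENOENT
  console.error('spawn failed:', err);
});

child.stderr.on('data', function (chunk) {
  // application-level errors often only surface here...
  process.stderr.write(chunk);
});

child.on('exit', function (code, signal) {
  // ...or here, as a non-zero exit status
  if (code === 0) {
    console.log('finished OK');
  } else {
    console.log('failed with code', code, 'and signal', signal);
  }
});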
See also: Just check status process in c
I'm trying to listen for data changes in my Firebase using Firebase's package for Node. I'm using the on() method, which is supposed to listen for changes non-stop (as opposed to the once() method, which only listens for the first occurrence of a specific event). My listener.js file on the server is exactly like this:
var Firebase = require('firebase');
var Ref = new Firebase('https://mydatabase.firebaseio.com/users/');

Ref.on('child_changed', function (childsnapshot, prevchildname) {
  Ref.child(childsnapshot.key()).push("I hear you!");
});
But it only works for the first occurrence and throws a fatal memory error after a second occurrence.
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
I'm very new to server-side programming and don't know what to do. I must be missing something important. Should I set up special server settings with Node first? Or maybe make a daemon that runs a script with the once() method every second or so?
I'm pretty sure you're creating an endless loop here:
You push a value to https://mydatabase.firebaseio.com/users/
the child_changed event fires in your script
your script pushes a new child under the value
so we go back to step 2 and repeat
It will happen quite rapidly too, since Firebase clients fire local events straight away.
It looks like you're trying to create a chat bot. Which means you more likely want to create sibling messages:
var Firebase = require('firebase');
var ref = new Firebase('https://mydatabase.firebaseio.com/users/');

ref.on('child_changed', function (childsnapshot, prevchildname) {
  // push a sibling under /users/ instead of a child of the changed
  // node, so this write does not retrigger child_changed
  ref.push("I hear you!");
});
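An even safer layout (just a sketch; the /replies/ path is an assumption, not something from your code) writes the responses under a completely separate node, so the listener on /users/ can never observe its own writes:
var repliesRef = new Firebase('https://mydatabase.firebaseio.com/replies/');

ref.on('child_changed', function (childsnapshot) {
  // writes land under /replies/, outside the subtree we listen on
  repliesRef.push({ user: childsnapshot.key(), text: "I hear you!" });
});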
Note that it is pretty inefficient to use Stack Overflow to debug code. Since you seem to be on Windows, I recommend installing Visual Studio and its Node.js tools. They have a great debugger that lets you step through the code. Setting a breakpoint in your callback (on the line with ref.push) will quickly show you what is going wrong.
Using Angular and socket.io, I am getting duplicate events on the client every time the server emits. Angular is only included once, there is only one socket.io connection, and there is only one listener per event on the client. Upon receiving an event on the server, data is logged, and that only ever happens once. Then the data is emitted, and the callback is called twice on the client, despite only being registered once (to my knowledge).
client:
// inside a controller
var thing = 'foo';
socket.emit('sentUp', thing);
socket.on('sentDown', function (thing) {
  console.log(thing); // this happens twice
});
server:
/*
node & express stuff here
*/
io.on('connection', function (socket) {
  socket.on('sentUp', function (stuff) {
    console.log('this happened'); // x1
    socket.emit('sentDown', stuff);
  });
});
Most likely your controllers are being loaded more than once. You can easily check this by logging.
Move the socket code out of the controllers and into a service, where it will only be registered once.
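A sketch of that idea, assuming Angular 1.x, a module variable named app, and a global io from the socket.io client script:
// registered once, when the service is first injected
app.factory('socket', function () {
  var socket = io.connect();
  socket.on('sentDown', function (thing) {
    console.log(thing); // now fires once per server emit
  });
  return socket;
});

// controllers only emit; they no longer register listeners,
// so reloading a controller cannot duplicate handlers
app.controller('MyCtrl', function ($scope, socket) {
  socket.emit('sentUp', 'foo');
});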
I have found in my own socket.io client code that some of the connect events can occur each time the client reconnects. So, if the connection is lost for any reason and the client then automatically reconnects, the client may get a connect event again.
If, like me, you're adding your event handlers in the 'connect' event handler, then you may be accidentally adding multiple handlers for the same event, and thus you would think you were seeing duplicate data. You don't show that part of your client code, so I don't know whether you're doing it that way, but this is an issue that hit me, and it is a natural way to do things.
If that is what is happening to you, there are a couple possible work-arounds:
You can add your event handlers outside the connect event. You don't have to wait for connection to finish before adding event handlers. This way, you'd only ever do them once.
Before adding the event handlers you add upon connection, you can remove any previous event handlers that were installed upon connection, to make sure you never get dups (see the sketch below).
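A sketch of that second workaround (the event name is taken from your example):
socket.on('connect', function () {
  // drop any handler left over from a previous connection...
  socket.removeAllListeners('sentDown');
  // ...before registering it again
  socket.on('sentDown', function (thing) {
    console.log(thing);
  });
});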