I just got started with Node.js and am using it to create a server. Clients connect to it using socket.io, receive jobs to process, and send the results back to the Node.js server.
However, the server occasionally crashes with the following error:
node.js:201
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: connect ECONNREFUSED
at errnoException (net.js:614:11)
at Object.afterConnect [as oncomplete] (net.js:605:18)
I have no idea what is causing this.
ECONNREFUSED means you tried to make a connection to another machine but your connection was refused - either no one was listening or a firewall blocked you.
I have seen this using http, but I think it could also happen using straight sockets.
You are probably struggling with node.js dying whenever the server you are calling refuses to connect. Try this:
process.on('uncaughtException', function (err) {
  // log the error instead of letting the process crash
  console.log(err);
});
It will just log the errors and keep your service up and running.
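If you want to handle the failure closer to where it happens, you can also attach an "error" listener to the outgoing request or socket itself; Node only crashes when an "error" event has nobody listening. A rough sketch using a plain HTTP call (the URL is just a placeholder):
var http = require('http');

// placeholder URL for whatever this server calls out to
http.get('http://127.0.0.1:9000/jobs', function (res) {
  console.log('status:', res.statusCode);
}).on('error', function (err) {
  // ECONNREFUSED is delivered here instead of crashing the whole process
  console.log('request failed:', err.code);
});
The uncaughtException handler is still useful as a last resort, but a per-request handler lets you report or retry the specific call that failed.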
Related
I have run into this error with my Node server when trying to access a FileMaker Server-driven backend, both locally and remotely.
HTTP 6369: SOCKET ERROR: read ECONNRESET Error: read ECONNRESET at TLSWrap.onStreamRead (internal/stream_base_commons.js:205:27)
After trying everything I can find, triple-checking my FMS settings and logs, restarting the server and the machine several times, I cannot figure out what exactly is going wrong and how to solve it.
I have seen some answers to similar questions here on SO, but for the most part they just suggest catching the error in the Express route (which has not been changed for months and was working fine before the error appeared). I have done that, but it just outputs a more verbose version of the main error above:
HTTP 6369: SOCKET ERROR: read ECONNRESET Error: read ECONNRESET
at TLSWrap.onStreamRead (internal/stream_base_commons.js:205:27)
at TLSSocket.Readable.on (_stream_readable.js:838:35)
at tickOnSocket (_http_client.js:691:10)
at onSocketNT (_http_client.js:744:5)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
at ClientRequest.onSocket (_http_client.js:732:11)
at setRequestSocket (_http_agent.js:436:7)
at handleSocketCreation_Inner (_http_agent.js:429:7)
at oncreate (_http_agent.js:286:5)
at Agent.createSocket (_http_agent.js:291:5)
at Agent.addRequest (_http_agent.js:248:10)
at new ClientRequest (_http_client.js:296:16)
at Object.request (https.js:314:10)
at Request.start (/Users/******/Documents/*****/code/******/node_modules/request/request.js:751:32)
THE ERR : Error: read ECONNRESET
at TLSWrap.onStreamRead (internal/stream_base_commons.js:205:27)
at TLSSocket.Readable.on (_stream_readable.js:838:35)
at tickOnSocket (_http_client.js:691:10)
at onSocketNT (_http_client.js:744:5)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
---------------------------------------------
at ClientRequest.onSocket (_http_client.js:732:11)
at setRequestSocket (_http_agent.js:436:7)
at handleSocketCreation_Inner (_http_agent.js:429:7)
at oncreate (_http_agent.js:286:5)
at Agent.createSocket (_http_agent.js:291:5)
at Agent.addRequest (_http_agent.js:248:10)
at new ClientRequest (_http_client.js:296:16)
at Object.request (https.js:314:10)
at Request.start (/Users/********/Documents/*******/code/******/node_modules/request/request.js:751:32) {
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall:'read'
}
Also, my Node API in general had been operating more or less seamlessly for several months of development before this problem arose. One thing that changed on my machine was my company adding ScaleFusion device-management software. I mentioned before the install that it could cause problems with the Node server's API calls, and the error only appeared after ScaleFusion was added. Is it possible the software is causing this problem?
I have been trying to find a fix for a couple of days now but am running out of leads. If anyone can help out I'd really appreciate it. Thanks in advance for any suggestions.
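To illustrate, the catch in the route is along the lines of the sketch below (simplified; the route path, backend URL, and response handling here are placeholders rather than the real code, though the request library is the one in the stack trace), and it only ever logs the same ECONNRESET fields shown above:
const express = require('express');
const request = require('request');

const app = express();

// placeholder route and backend URL
app.get('/records', (req, res) => {
  request({ url: 'https://fms.example.com/fmi/data/v1/databases', json: true }, (err, fmsRes, body) => {
    if (err) {
      // the reset surfaces here with code/errno/syscall, but no deeper cause
      console.error('FMS request failed:', err.code, err.syscall);
      return res.status(502).json({ error: err.code });
    }
    res.json(body);
  });
});

app.listen(3000);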
I've read all the related issues on SO and GitHub about this error and none of them seem to address this situation.
When I run the following code:
const http = require('http');
const axios = require('axios');

response = await axios.get('http://localhost:8082/panda', {
  httpAgent: new http.Agent({ keepAlive: true, keepAliveMsecs: 10000 })
});
I get the following error:
{ Error: socket hang up
at createHangUpError (_http_client.js:345:15)
at Socket.socketOnEnd (_http_client.js:437:23)
at emitNone (events.js:110:20)
at Socket.emit (events.js:207:7)
at endReadableNT (_stream_readable.js:1059:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9) code: 'ECONNRESET' }
{ Error: read ECONNRESET
at _errnoException (util.js:1019:11)
at TCP.onread (net.js:608:25)
code: 'ECONNRESET',
errno: 'ECONNRESET',
syscall: 'read',
etc...
There's no stack trace beyond the onread call, so it's unclear how I can get any additional information beyond the ECONNRESET error code (-54) passed to the onread method in net.js.
This issue happens with every request - it is not intermittent.
A few observations:
- when making the same request from Chrome or Postman, the request does NOT fail
- attempting to reproduce a successful Chrome request by sending the same headers fails
- setting the accept header to use 'gzip', etc. does not help; I have tried all the recommendations, including some weird ones like setting the content length and adding a body to the request despite this being a GET request
- the error always appears in net.js, but it happens with both the request and axios libraries
Here's the interesting part: I can only seem to reproduce this problem reliably when I am hosting the server locally. I am able to reach the dev instance, or run the server in Vagrant via a Docker container, without issue.
I am running macOS Sierra 10.12.6 if that makes any difference. The server is written in Java and uses the Spring framework. Here is the RestTemplate configuration I'm using for HTTP calls:
@Bean
public RestTemplate restTemplate() {
    // request timeout
    int timeout = 5000;
    // connection-pooling factory with timeouts to prevent disastrous request responses
    HttpComponentsClientHttpRequestFactory cf = new HttpComponentsClientHttpRequestFactory();
    cf.setReadTimeout(timeout);
    cf.setConnectTimeout(timeout);
    cf.setConnectionRequestTimeout(timeout);
    return new RestTemplate(cf);
}
I've tried messing around with these settings with no luck.
Any ideas?
I had the same issue. The cause was in my proxy settings: I had proxied localhost:8080 to localhost:3000, which is why all API requests from my Node.js app to localhost:3000 returned an ECONNRESET error. When I removed the proxy port, the issue was resolved. Check your network settings; it may be a redirect problem.
This was a known issue with Node.js and has since been resolved. Please see "http.Agent: idle sockets throw unhandled ECONNRESET" for details.
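If you need to keep the keep-alive agent from the snippet above, a common workaround while ruling this out is to retry a request once when it fails with ECONNRESET from a stale pooled socket. This is only a sketch; the URL and the single-retry policy are assumptions on my part, not something taken from the linked issue:
const axios = require('axios');

async function getWithRetry(url) {
  try {
    return await axios.get(url);
  } catch (err) {
    // an idle keep-alive socket can be reset by the server; retry once on a fresh socket
    if (err.code === 'ECONNRESET') {
      return await axios.get(url);
    }
    throw err;
  }
}

// usage (placeholder URL)
// getWithRetry('http://localhost:8082/panda').then(res => console.log(res.status));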
Also ensure you have enabled the correct security settings when opening a port on macOS.
I have been working on this project for a long time. I am facing a problem which I am unable to solve after trying really hard, and I would really appreciate it if you could help me. I am trying to connect to a MongoDB database using Node.js, but whenever I start the server I get the following error, and I have no idea how to solve it.
{ [Error: Cannot find module '../build/Release/bson'] code: 'MODULE_NOT_FOUND' }
js-bson: Failed to load c++ bson extension, using pure JS version
The magic happens on port 8080
Connected correctly to server
events.js:85
throw er; // Unhandled 'error' event
^
Error: failed to connect to [undefined:27017]
at null.<anonymous> (C:\Users\Admin\Desktop\Apps\linked-up 0.0.1\easy-node-authentication-master\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\server.js:556:74)
at emit (events.js:118:17)
at null.<anonymous> (C:\Users\Admin\Desktop\Apps\linked-up 0.0.1\easy-node-authentication-master\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\connection_pool.js:156:15)
at emit (events.js:110:17)
at Socket.<anonymous> (C:\Users\Admin\Desktop\Apps\linked-up 0.0.1\easy-node-authentication-master\node_modules\mongoose\node_modules\mongodb\lib\mongodb\connection\connection.js:534:10)
at Socket.emit (events.js:107:17)
at net.js:950:16
at process._tickCallback (node.js:355:11)
Thank you for your time and consideration
Are you sure you have MongoDB up and running locally on your system?
To start MongoDB with a local database path, try the following command:
[location of mongodb]/bin/mongod --dbpath "specify location of database"
Did you write your code like below?
mongoose.connect(configDB.url); // connect to database
I guess you didn't set your configDB.url.
Set it to your database URL, e.g. 'localhost:27017/myDatabase'.
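Putting that together, configDB.url just needs to be defined before mongoose.connect runs; the "failed to connect to [undefined:27017]" message means it was undefined. A minimal sketch (the config file name and database name are placeholders):
// config/database.js (hypothetical file name)
module.exports = {
  url: 'mongodb://localhost:27017/myDatabase'   // placeholder database name
};

// server.js
var mongoose = require('mongoose');
var configDB = require('./config/database.js');

mongoose.connect(configDB.url); // connect to database
mongoose.connection.on('error', function (err) {
  // a bad or undefined url shows up here instead of crashing the process
  console.log('mongodb connection error:', err);
});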
I have a simple io.js HTTP server which communicates with another HTTP backend on my development machine. Now my IP has changed and the backend request won't work due to the wrong IP. I have (or at least thought I had) error handling in place, however the server crashes in some situations due to an unhandled exception:
When making two subsequent requests, the first one "hangs" and then the second request crashes the server:
events.js:141
throw er; // Unhandled 'error' event
^
Error: connect EHOSTDOWN 192.168.1.11:80 - Local (192.168.1.10:54125)
at Object.exports._errnoException (util.js:846:11)
at exports._exceptionWithHostPort (util.js:869:20)
at connect (net.js:840:14)
at lookupAndConnect (net.js:933:5)
at Socket.connect (net.js:902:5)
at Agent.exports.connect.exports.createConnection (net.js:61:35)
at Agent.createSocket (_http_agent.js:177:16)
at Agent.addRequest (_http_agent.js:147:23)
at new ClientRequest (_http_client.js:132:16)
at Object.exports.request (http.js:30:10)
The error in question won't trigger backRequest.on("error", errFn), and it is not a standard error as in function(err, response, body). It can't be caught using try...catch.
How can I catch and gracefully handle this error?
I have dug into net.js/http.js a bit. When emitting a socket error manually:
backendRequest.on("socket", function(socket) {
socket.emit("error", "DIE!");
});
it is handled by the normal error handler. The EHOSTDOWN error however is not. The reason for this seems to be that the socket error handler is only installed on the next tick after Socket.connect() (see https://github.com/joyent/node/blob/master/lib/_http_client.js#L498).
"My" error however is triggered directly upon connect() (must have something to do with the first connection still trying to find the host).
I can gracefully catch that error using a custom Agent.createConnection:
// override the agent's createConnection so the handler is attached synchronously
agent.createConnection = function (options) {
    var socket = net.createConnection(options);
    // install error handler immediately
    socket.on("error", function (err) {
        console.log("AHA", err);
    });
    return socket;
};
However, that seems very verbose and bulky. Is that truly the best way to do it? The error does not seem to be that special, yet it appears to be unhandled in the core libs. Why is that, and why is the socket error handler only installed on nextTick?
How would you properly catch that error? Is the agent the correct way?
A "full" example to play with can be found here: http://pastebin.com/jaCUPaHX
As pointed out by the commenters, it was a bug: https://github.com/nodejs/io.js/pull/2054
I get this error occasionally when running my node.js script.
events.js:66
throw arguments[1]; // Unhandled 'error' event
^
Error: write ECONNRESET
at errnoException (net.js:768:11)
at Object.afterWrite (net.js:592:19)
What causes this error? I read somewhere that it is caused by attempting to write data to a closed socket. Is that right?
If that is the case, how do I check whether a socket connection is still active?
I found this SO question here, but no one had answered there.
And thirdly, is a simple try/catch around the socket.write statement enough to handle this error, or does it emit error events which I must handle?
We've seen it with http-proxy, and the issue was reported here: https://github.com/nodejitsu/node-http-proxy/issues/331
Node.js will throw if the "error" event (and only the "error" event) is not handled.
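So a plain try/catch around socket.write is not enough: the failure is reported asynchronously via the "error" event, and attaching a listener to it is what keeps the process alive. A minimal sketch (host, port, and payload are placeholders):
var net = require('net');

// placeholder host/port
var socket = net.connect({ host: '127.0.0.1', port: 9000 });

socket.on('error', function (err) {
  // "write ECONNRESET" is emitted here asynchronously; a try/catch around write() never sees it
  console.log('socket error:', err.code);
});

socket.on('close', function () {
  // after "close" the connection is gone; stop writing to it
  console.log('socket closed');
});

// checking writable (or socket.destroyed) before writing avoids writes to a dead socket
if (socket.writable) {
  socket.write('some data');   // placeholder payload
}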