I've made a lot of bots, hosting some on my personal laptop and some on Heroku, but in both cases I received an error that terminated Node.js. I used bot.on('error', console.error) to view the error, and here's the result:
type: 'error',
message: 'read ECONNRESET',
error: {
  Error: read ECONNRESET
      at TLSWrap.onStreamRead (internal/stream_base_commons.js:111:27)
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read'
}
If anyone knows how to stop that from happening, please tell me.
"ECONNRESET" usually happens when another end of the TCP connections closes its end due to any protocol-related errors and since no one is listening to the 'error' event it gets thrown, to deal with it you should put a listener which can handle such erroneous condition.
You can read more about this kind of exception handling here: node-js-best-practice-exception-handling
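For a Discord bot specifically, a minimal sketch might look like this (assuming client is your discord.js Client instance; the token string is a placeholder):

const Discord = require('discord.js');
const client = new Discord.Client();

// Handle errors emitted by the client itself (e.g. the gateway
// connection being reset) so they don't crash the process.
client.on('error', (err) => {
  console.error('Client error:', err.message);
});

// Optional last resort: log anything that still slips through.
process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
});

client.login('YOUR_BOT_TOKEN');

With a listener like this in place, an ECONNRESET on the underlying socket gets logged instead of terminating Node.js.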
Related
I am suddenly getting this error and there is no way I am able to trace the issue; my server keeps crashing again and again, and I haven't made any changes in months, for sure.
events.js:288
throw er; // Unhandled 'error' event
Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:205:27)
Emitted 'error' event on Socket instance at:
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall: 'read'
}
Is anyone else facing this issue, or does anyone have a solution to the crash?
I tried commenting out as much code as possible to isolate the issue, but no luck. Can anyone help?
This problem could be due to your connection: maybe you lost your internet connection, or the problem is on the server side, so check whether your server is running properly.
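Beyond checking connectivity, the trace shows the 'error' event being emitted on a Socket instance, so another option is to attach an 'error' listener to each incoming connection (a sketch, assuming a plain Node http server; adapt it to your framework):

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// Give every incoming socket an 'error' handler so a client
// resetting the connection is logged instead of crashing the process.
server.on('connection', (socket) => {
  socket.on('error', (err) => {
    if (err.code === 'ECONNRESET') {
      console.warn('Client connection reset');
    } else {
      console.error('Socket error:', err);
    }
  });
});

server.listen(3000);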
I have run into this error with my Node server when trying to access a FileMaker Server-driven backend, both locally and remotely.
HTTP 6369: SOCKET ERROR: read ECONNRESET Error: read ECONNRESET at TLSWrap.onStreamRead (internal/stream_base_commons.js:205:27)
After trying everything I can find, triple-checking my FMS settings and logs, and restarting the server and the machine several times, I cannot figure out what exactly is going wrong or how to solve it.
I have seen some answers to similar questions here on SO, but for the most part they just suggest catching the error in the Express route (which hasn't been changed for months and was working fine before the error appeared). I have done that (a simplified sketch follows the traces below), but it just outputs a more verbose version of the main error above:
HTTP 6369: SOCKET ERROR: read ECONNRESET Error: read ECONNRESET
at TLSWrap.onStreamRead (internal/stream_base_commons.js:205:27)
at TLSSocket.Readable.on (_stream_readable.js:838:35)
at tickOnSocket (_http_client.js:691:10)
at onSocketNT (_http_client.js:744:5)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
at ClientRequest.onSocket (_http_client.js:732:11)
at setRequestSocket (_http_agent.js:436:7)
at handleSocketCreation_Inner (_http_agent.js:429:7)
at oncreate (_http_agent.js:286:5)
at Agent.createSocket (_http_agent.js:291:5)
at Agent.addRequest (_http_agent.js:248:10)
at new ClientRequest (_http_client.js:296:16)
at Object.request (https.js:314:10)
at Request.start (/Users/******/Documents/*****/code/******/node_modules/request/request.js:751:32)
THE ERR : Error: read ECONNRESET
at TLSWrap.onStreamRead (internal/stream_base_commons.js:205:27)
at TLSSocket.Readable.on (_stream_readable.js:838:35)
at tickOnSocket (_http_client.js:691:10)
at onSocketNT (_http_client.js:744:5)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
---------------------------------------------
at ClientRequest.onSocket (_http_client.js:732:11)
at setRequestSocket (_http_agent.js:436:7)
at handleSocketCreation_Inner (_http_agent.js:429:7)
at oncreate (_http_agent.js:286:5)
at Agent.createSocket (_http_agent.js:291:5)
at Agent.addRequest (_http_agent.js:248:10)
at new ClientRequest (_http_client.js:296:16)
at Object.request (https.js:314:10)
at Request.start (/Users/********/Documents/*******/code/******/node_modules/request/request.js:751:32) {
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall: 'read'
}
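For reference, the catch I added to the route looks roughly like this (simplified; the URL and route path are placeholders for the real FileMaker Server endpoint):

const express = require('express');
const request = require('request');

const app = express();
const FMS_URL = 'https://example.com/fmi/data/v1/databases'; // placeholder

app.get('/records', (req, res) => {
  request(FMS_URL, (err, response, body) => {
    if (err) {
      // This is where the verbose ECONNRESET above gets logged.
      console.error('THE ERR :', err);
      return res.status(502).json({ error: err.code });
    }
    res.send(body);
  });
});

app.listen(3000);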
Also, my Node API in general had been operating more or less seamlessly for several months of development before this problem arose. One thing that did change on my machine was my company adding ScaleFusion device-management software. Before the install, I mentioned that it could cause problems with the Node server's API calls, and the error only appeared after ScaleFusion was added. Is it possible the software is causing this problem?
I have been trying to find a fix for a couple of days now but am running out of leads. If anyone can help out I'd really appreciate it. Thanks in advance for any suggestions.
This error randomly appears when I leave my Discord bot online for too long, around 3-4 hours, though sometimes the error occurs sooner and sometimes later. It's really bothering me.
events.js:188
throw err;
^
Error: Unhandled "error" event. ([object Object])
at Client.emit (events.js:186:19)
at WebSocketConnection.onError (D:\BasementMonster\node_modules\discord.js\src\client\websocket\WebSocketConnection.js:374:17)
at WebSocket.onError (D:\BasementMonster\node_modules\ws\lib\event-target.js:128:16)
at emitOne (events.js:116:13)
at WebSocket.emit (events.js:211:7)
at _receiver.cleanup (D:\BasementMonster\node_modules\ws\lib\websocket.js:211:14)
at Receiver.cleanup (D:\BasementMonster\node_modules\ws\lib\receiver.js:557:13)
at WebSocket.finalize (D:\BasementMonster\node_modules\ws\lib\websocket.js:206:20)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
I wrestled with this problem for a while in my own code. The main problem is that the trace is completely unhelpful, and that the error occurs so infrequently as to make "run it in the terminal and wait" a futile task. Eventually, I was able to figure out that the Discord.js client itself was throwing an error -- this wasn't mentioned in any of the documentation I read, so I had no handler for it -- and since this oopsie happens in the Discord.js package itself, there was no line in my code for it to point to in the trace.
What needs to exist in the code somewhere is something along the lines of
client.on('error', (err) => {
console.log(err.message)
});
This is adjusted, of course, for whatever your Client object is, and whatever you want the error handling to be.
This specific trace, as far as I can tell, comes from losing Internet connection, which is something I want to have a record of, so instead of console.log() I called my own custom function that writes the event out to a logfile with a timestamp.
Testing this by running an instance of the bot and killing my Internet connection revealed that not only did the logging function work, but that Discord.js automatically restored the bot's session after it came back. (Your mileage may vary.)
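In case it helps, the logging variant looks something like this (a sketch; the filename and helper name are just illustrative):

const fs = require('fs');

// Illustrative helper: append the event to a logfile with a timestamp.
function logToFile(message) {
  const line = `[${new Date().toISOString()}] ${message}\n`;
  fs.appendFile('bot-errors.log', line, (err) => {
    if (err) console.error('Failed to write logfile:', err);
  });
}

client.on('error', (err) => {
  logToFile(`Client error: ${err.message}`);
});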
You need to add an error handling event, like this: client.on('error', console.error);
I am using request js to make HTTP requests. My script makes a GET request every minute to a URL that is secured over SSL. For 3 hours, everything worked fine (it made over 150 requests successfully). Suddenly, my Node.js app crashed and printed this error:
SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac:s3_pkt.c:535
at _errnoException (util.js:1022:11)
at WriteWrap.afterWrite [as oncomplete] (net.js:880:14)
code: 'EPROTO', errno: 'EPROTO', syscall: 'write' }
The error stack trace pointed me to the request js library.
What does the above error mean? How can I catch it and prevent it from happening again in the future?
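For context, the polling loop is essentially this (simplified; the URL is a placeholder):

const request = require('request');

const URL = 'https://example.com/secure-endpoint'; // placeholder

// Poll the HTTPS endpoint every minute.
setInterval(() => {
  request
    .get(URL)
    .on('response', (response) => {
      console.log('status', response.statusCode);
    });
  // Note: there is no 'error' listener on the request stream,
  // so a write error like the EPROTO above is thrown and
  // crashes the process.
}, 60 * 1000);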
I'm using Kue to create jobs for my MEAN.js app. If the application is idle for some time, the Redis connection is closed; apparently Kue then tries to process jobs while the connection is closed, and I get some errors.
I'm watching for stuck jobs every 6 seconds, but this doesn't seem to help avoid the errors.
app.jobs.watchStuckJobs(1000 * 6);
These are the errors I'm getting, for each job I'm processing, after the connection is closed and before the connection is restored:
ERROR: { [Error: Redis connection to XXX failed - read ETIMEDOUT] code: 'ETIMEDOUT', errno: 'ETIMEDOUT', syscall: 'read' }
ERROR: { [AbortError: Redis connection lost and command aborted. It might have been processed.]
  code: 'UNCERTAIN_STATE',
  command: 'BLPOP',
  args: [ 'q:send-email-invitations:jobs', 0 ],
  origin: { [Error: Redis connection to XXX failed - read ETIMEDOUT] code: 'ETIMEDOUT', errno: 'ETIMEDOUT', syscall: 'read' } }
I've been reading the Kue documentation on stuck jobs, but the solutions it recommends are using Domains (which is deprecated in the Node version I'm using), using promises, or binding to uncaughtException, which loses the error context.
What's the best approach in this case, so I don't lose the error context, and I can trace what's happening with the jobs?
If I have to choose one of these options, which is the best choice and why?
Is there any Redis configuration or anything outside Kue that I need to be aware of?
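One direction I'm considering is attaching a queue-level 'error' handler and letting the Redis client reconnect with backoff (a sketch; the retry_strategy shape assumes Kue forwards the options object through to node_redis):

const kue = require('kue');

const queue = kue.createQueue({
  redis: {
    host: '127.0.0.1',
    port: 6379,
    options: {
      // Assumption: Kue passes these options to redis.createClient.
      // Reconnect with capped backoff instead of aborting commands.
      retry_strategy: (opts) => Math.min(opts.attempt * 100, 3000),
    },
  },
});

// Log queue-level errors (e.g. the ETIMEDOUT on the blocking BLPOP)
// with their context instead of crashing the process.
queue.on('error', (err) => {
  console.error('Queue error:', err.code || '', err.message);
});

queue.watchStuckJobs(1000 * 6);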