Error in mineflayer: Unsupported brand channel name - javascript

I'm trying to make a mineflayer bot, but it keeps throwing the same error. Here is the code:
const mineflayer = require('mineflayer')
const bot = mineflayer.createBot({
  host: 'localhost',
  port: ,
  username: 'Test_Bot'
})
And here is the error:
Error: Unsupported brand channel name
at getBrandCustomChannelName (/Users/jeonghunchae/Desktop/Bots/node_modules/mineflayer/lib/plugins/game.js:22:11)
at inject (/Users/jeonghunchae/Desktop/Bots/node_modules/mineflayer/lib/plugins/game.js:67:24)
at /Users/jeonghunchae/Desktop/Bots/node_modules/mineflayer/lib/plugin_loader.js:41:7
at Array.forEach (<anonymous>)
at injectPlugins (/Users/jeonghunchae/Desktop/Bots/node_modules/mineflayer/lib/plugin_loader.js:40:16)
at EventEmitter.onInjectAllowed (/Users/jeonghunchae/Desktop/Bots/node_modules/mineflayer/lib/plugin_loader.js:12:5)
at Object.onceWrapper (node:events:627:28)
at EventEmitter.emit (node:events:513:28)
at Client.next (/Users/jeonghunchae/Desktop/Bots/node_modules/mineflayer/lib/loader.js:124:9)
at Object.onceWrapper (node:events:627:28)
Is there any way to fix this error?
First I tried opening my world to LAN, and it produced this error. Then I created a private server with online-mode: false, but it still produced the same error.

I had the same problem.
After a lot of research, I found I wasn't sure whether my Minecraft version (1.19.2) was supported by mineflayer.
After switching to 1.19, the error message was gone.
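If the server runs a version that mineflayer doesn't support yet, you can also pin the protocol version explicitly with the `version` option of `createBot`. A minimal sketch (the port and version values here are assumptions, not taken from the original code):

```javascript
// Sketch: pin the Minecraft version so mineflayer doesn't have to
// auto-negotiate one it may not support yet. Values are examples.
const options = {
  host: 'localhost',
  port: 25565,          // default Minecraft server port (assumption)
  username: 'Test_Bot',
  version: '1.19'       // a version known to be supported by mineflayer
};
// const bot = mineflayer.createBot(options);
```

With the version pinned, mineflayer skips auto-detection entirely, which also makes the failure mode clearer when the server and bot versions disagree.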

Related

Mongodb connection suddenly stopped working with "Bad Auth" error

Mongoose was working just fine for more than two months, but it suddenly crashed with this error:
/home/container/node_modules/mongodb/lib/cmap/connection.js:210
callback(new error_1.MongoServerError(document));
^
MongoServerError: bad auth : Authentication failed.
at Connection.onMessage (/home/container/node_modules/mongodb/lib/cmap/connection.js:210:30)
at MessageStream.<anonymous> (/home/container/node_modules/mongodb/lib/cmap/connection.js:63:60)
at MessageStream.emit (node:events:527:28)
at processIncomingData (/home/container/node_modules/mongodb/lib/cmap/message_stream.js:132:20)
at MessageStream._write (/home/container/node_modules/mongodb/lib/cmap/message_stream.js:33:9)
at writeOrBuffer (node:internal/streams/writable:390:12)
at _write (node:internal/streams/writable:331:10)
at MessageStream.Writable.write (node:internal/streams/writable:335:10)
at TLSSocket.ondata (node:internal/streams/readable:777:22)
at TLSSocket.emit (node:events:527:28) {
ok: 0,
code: 8000,
codeName: 'AtlasError',
[Symbol(errorLabels)]: Set(1) { 'HandshakeError' }
}
Does anyone know what the issue is?
My IP access list is correctly configured to allow access from anywhere.
My URI is configured per the "Connect your application" instructions and copy-pasted into my project. Again, the connection worked fine for two months and then stopped out of nowhere.
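One common cause of "bad auth" (not confirmed to be the cause here) is a password containing reserved characters that aren't percent-encoded in the connection URI. A minimal sketch with hypothetical credentials:

```javascript
// Hypothetical credentials: reserved characters in the password must be
// percent-encoded, or the Atlas handshake fails with "bad auth".
const user = 'appUser';
const password = 'p@ss/word';                 // contains '@' and '/'
const encoded = encodeURIComponent(password); // "p%40ss%2Fword"
const uri = `mongodb+srv://${user}:${encoded}@cluster0.example.mongodb.net/mydb`;
```

It's also worth checking whether the database user's password was rotated or the user was deleted on the Atlas side, since either would break a previously working URI without any change in the project.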

Error while connecting to local Redis server in Node.js

I have a Redis server running on localhost at port 6379, and I'm connecting to it like this:
const redisClient = redis.createClient({
  url: `redis://localhost:6379`,
  legacyMode: false
});
Although when I try to connect to it, it gives the following error:
TypeError [ERR_INVALID_ARG_TYPE]: The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received undefined
at new NodeError (node:internal/errors:371:5)
at _write (node:internal/streams/writable:312:13)
at WriteStream.Writable.write (node:internal/streams/writable:334:10)
at Commander.<anonymous> (C:\Users\atsukoro\Desktop\Projekty\simpleanilist\src\index.ts:42:24)
at Commander.emit (node:events:526:28)
at Commander.emit (node:domain:475:12)
at RedisSocket.<anonymous> (C:\Users\atsukoro\Desktop\Projekty\simpleanilist\node_modules\@redis\client\dist\lib\client\index.js:347:35)
at RedisSocket.emit (node:events:526:28)
at RedisSocket.emit (node:domain:475:12)
at RedisSocket._RedisSocket_connect (C:\Users\atsukoro\Desktop\Projekty\simpleanilist\node_modules\@redis\client\dist\lib\client\socket.js:114:14) {
code: 'ERR_INVALID_ARG_TYPE'
}
I read that setting legacyMode: false would fix this issue, but that didn't work either.
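One thing to check with node-redis v4 (an assumption about the version in use): the client no longer connects automatically, so you must call `connect()` yourself and attach an `'error'` listener before doing so. A sketch:

```javascript
// Sketch, assuming node-redis v4: connect() must be called explicitly,
// and an 'error' listener should be attached before connecting.
const options = { url: 'redis://localhost:6379', legacyMode: false };
// const redisClient = redis.createClient(options);
// redisClient.on('error', (err) => console.error('Redis client error', err));
// await redisClient.connect();   // v4 does not connect on createClient()
```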

DiscordJS Error POST-FAILED Code: 50001 remove error message

I run a Discord bot for music and other features.
My console keeps getting spammed with this message:
20:2:2022 - 17:22 | Info: [Slash Command]: [POST-FAILED] Guild 945348402374385736, Command: youtube
DiscordAPIError: Missing Access
at RequestHandler.execute (/home/m9mo/node_modules/discord.js/src/rest/RequestHandler.js:154:13)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async RequestHandler.push (/home/m9mo/node_modules/discord.js/src/rest/RequestHandler.js:39:14)
at async /home/m9mo/util/RegisterSlashCommands.js:36:9 {
method: 'post',
path: '/applications/895283932252241940/guilds/945348402374385736/commands',
code: 50001,
httpStatus: 403
}
I know the reason for this error: some people create their own invite link using the bot's ID...
Is there any way to disable this message?
You could check for the error code of that specific error (in this case 50001) and, when you receive it, stop it from being logged. You can find all the relevant error codes here: https://discord.com/developers/docs/topics/opcodes-and-status-codes#json-json-error-codes
You might also want to consider sending some kind of message like "I'm missing the permission to do..."
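For example, the registration call could swallow only this error code and rethrow everything else. A sketch (the function and variable names are hypothetical, not from the original RegisterSlashCommands.js):

```javascript
// Hypothetical helper: register slash commands for one guild, but skip
// guilds where the bot lacks access (DiscordAPIError code 50001).
async function registerGuildCommands(guild, commands) {
  try {
    await guild.commands.set(commands);
  } catch (err) {
    if (err.code === 50001) {
      // Missing Access: the bot was invited without the right scopes.
      console.warn(`Skipping guild ${guild.id}: missing access`);
      return false;
    }
    throw err; // anything else is a real problem, so keep logging it
  }
  return true;
}
```

This keeps the console clean for the known, expected failure while still surfacing genuinely unexpected errors.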

Why does ioredis client timeout when keep-alive is enabled?

I have been getting the following error when my script stays idle for some time, and I cannot figure out the reason for it.
error: [ioredis] Unhandled error event:
error: Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:111:27)
error: [ioredis] Unhandled error event
error: Error: read ETIMEDOUT
at TCP.onStreamRead (internal/stream_base_commons.js:111:27)
I initialize my Redis client like this (I am using the ioredis client):
const Redis = require("ioredis");

// ioredis commands already return promises, so no extra promisification is needed
const redis = new Redis({
  host: "my hostname",
  port: 6379,
  password: "some password"
});
Does anyone know the reason for this? Keep-alive is already enabled by default, as documented here: https://github.com/luin/ioredis/blob/master/API.md
I want the client to never time out, and to reconnect if a timeout occurs. I am using the Redis service on Azure.
We have an entire document that covers this topic: Troubleshoot Azure Cache for Redis timeouts.
If you are using the StackExchange.Redis client, the following pattern is the suggested best practice:
private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    return ConnectionMultiplexer.Connect("cachename.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
});

public static ConnectionMultiplexer Connection
{
    get
    {
        return lazyConnection.Value;
    }
}
In the case of ioredis, you can set the client option options.lazyConnect.
You will also want to look at the retry options available in your client. I hope this helps.
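For ioredis specifically, the relevant options are `lazyConnect`, `keepAlive`, and `retryStrategy`, all documented in the ioredis API docs. The values below are assumptions for illustration, not recommendations:

```javascript
// Sketch of ioredis options for aggressive reconnection. keepAlive is the
// initial delay (ms) passed to socket.setKeepAlive(); retryStrategy returns
// the delay (ms) before the next reconnection attempt.
const options = {
  host: "my hostname",
  port: 6379,
  password: "some password",
  lazyConnect: true,   // don't open the connection until the first command
  keepAlive: 10000,    // send TCP keep-alive probes (assumed interval)
  retryStrategy(times) {
    return Math.min(times * 50, 2000); // back off, capped at 2 s, retry forever
  }
};
// const redis = new Redis(options);
```

Returning a number from `retryStrategy` on every call is what makes the client reconnect indefinitely; returning nothing would stop retrying.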

NodeJS Express Request Entity Too Large

I've tried many of the solutions listed here (increasing the memory limit, adding parameterLimit, setting type to 'application/json') to fix this "Request Entity Too Large" error (it also returns HTTP status 413), but none of them make the error go away.
The JSON payload currently ranges from 200k up to 400k entities.
Here is the current configuration:
app.use(bodyParser.json({ limit: "15360mb", type: 'application/json' }));
app.use(bodyParser.urlencoded({
  limit: "15360mb",
  extended: true,
  parameterLimit: 5000000,
  type: 'application/json'
}));
app.use(bodyParser());
Any ideas on how to increase the limit?
More information
This is the full error message if it helps:
{ Error: Request Entity Too Large
at respond (/home/nodejs/server/node_modules/elasticsearch/src/lib/transport.js:307:15)
at checkRespForFailure (/home/nodejs/server/node_modules/elasticsearch/src/lib/transport.js:266:7)
at HttpConnector.<anonymous> (/home/nodejs/server/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)
at IncomingMessage.bound (/home/nodejs/server/node_modules/lodash/dist/lodash.js:729:21)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1056:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
status: 413,
displayName: 'RequestEntityTooLarge',
message: 'Request Entity Too Large',
path: '/_bulk',
query: {},
body: '<large body here>'
}
Solution
The error was due to a wrong configuration in Elasticsearch, not in Node.js. I managed to fix it by following https://github.com/elastic/elasticsearch-js/issues/241 and setting http.max_content_length: 500mb in elasticsearch.yml.
@ryanlutgen also provided a link with more information about this error:
https://github.com/elastic/elasticsearch/issues/2902
This issue has been fixed. Thanks for all the input!
If that doesn't solve your issue: bodyParser also has its own limit.
You can set it with
app.use(bodyParser({ limit: '4mb' }))
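On recent Express versions (4.16+), the body parsers are built in, so the standalone body-parser package isn't needed. A sketch; the 50mb limit is an arbitrary example, not a recommendation:

```javascript
// Sketch using Express's built-in parsers (Express >= 4.16).
const jsonOptions = { limit: '50mb' };
const urlencodedOptions = { limit: '50mb', extended: true, parameterLimit: 5000000 };
// app.use(express.json(jsonOptions));
// app.use(express.urlencoded(urlencodedOptions));
```

Note that for this question the server-side parser limit was a red herring: the 413 came from Elasticsearch's http.max_content_length, not from Express.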
