I've been working on a Discord bot command to fetch all of the archived threads on a server. I'm currently running into the 100-thread limit and am looking for a way to get around it.
Here's a post I made previously, from which I was able to get up to 100 threads fetched.
I found another post where they built a system to fetch more than 100 messages, but I haven't been able to convert that into code that fetches more than 100 threads. Unfortunately, dates and ordering have been inconsistent when printing threads, so getting that data has been a challenge.
Here's the code that I have so far that's only fetching 100 threads:
client.on('messageCreate', async function (message) {
  if (message.content === "!test") {
    const guild = client.guilds.cache.get("GUILD_ID_HERE"); // number removed for privacy reasons
    var textChannels = message.guild.channels.cache.filter(c => c.type === ChannelType.GuildText);
    textChannels.forEach(async (channel, limit = 500) => {
      let archivedThreads = await channel.threads.fetchArchived({ limit: 100 });
      console.log(archivedThreads);
    });
  }
});
In my final code I'm also printing this to a text file rather than the console, but I've left that additional code out here to keep things simpler for debugging.
Any ideas on how I can iterate through and print multiple sets of 100 threads at a time to bypass the limitations?
The callback passed to forEach is never awaited. You can use a for...of loop or a classic for loop inside an async function so each fetch is awaited before moving on to the next channel.
var textChannels = message.guild.channels.cache.filter(c => c.type === ChannelType.GuildText);

async function fetchArchives() {
  // iterate the Collection's values so `channel` is the channel itself, not an [id, channel] pair
  for (const channel of textChannels.values()) {
    let archivedThreads = await channel.threads.fetchArchived({ limit: 100 });
    console.log(archivedThreads);
  }
}

fetchArchives();
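To actually get past the first 100 archived threads you need to paginate. The following is a loose, untested sketch based on my reading of the discord.js ThreadManager docs, where fetchArchived() accepts a before option and returns an object with a hasMore flag:

async function fetchAllArchived(channel) {
  const allThreads = [];
  let before;
  let hasMore = true;
  while (hasMore) {
    // fetch the next page of up to 100 archived threads, starting before the last one we saw
    const fetched = await channel.threads.fetchArchived({ limit: 100, before });
    allThreads.push(...fetched.threads.values());
    hasMore = Boolean(fetched.hasMore) && fetched.threads.size > 0;
    before = fetched.threads.last(); // oldest thread in this batch
  }
  return allThreads;
}

Inside the for...of loop above you would then call const archivedThreads = await fetchAllArchived(channel); instead of fetching a single page.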
I have a simple update function which sometimes executes really slow compared to other times.
// Five executions with very different execution times
Finished in 1068ms // Almost six times slower than the next execution
Finished in 184ms
Finished in 175ms
Finished in 854ms
Finished in 234ms
The function is triggered from the frontend and doesn't run on Firebase Cloud Functions.
const startAt = performance.now();
const db = firebase.firestore();
const ref = db.doc(`random/nested/document/${id}`);

ref.update({
  na: boolean // a calculated boolean with array.includes(...)
    ? firebase.firestore.FieldValue.arrayRemove(referenceId)
    : firebase.firestore.FieldValue.arrayUnion(referenceId)
})
  .then(() => {
    let endAt = performance.now();
    console.log("Finished in " + (endAt - startAt) + "ms");
  });
Is there anything I can improve to fix these performance differences?
Also, the longer execution times don't appear only when removing from the array or only when adding to it; they show up for both operations. Sometimes these execution times go up to 3000ms.
Similar to cold-starting a Cloud Function, where everything is spun up, initialized, and made ready for use, a connection to Cloud Firestore also needs DNS to be resolved, ID tokens to be obtained to authenticate the request, a socket to the server to be opened, and handshakes to be exchanged between the server and the SDK.
Any new operations on the database can reuse the work already done to initialize that connection, which is why they appear faster.
Showing this as loose pseudocode:
let connection = undefined;

async function initConnectionToFirestore() {
  if (!connection) {
    await loadFirebaseConfig();
    await Promise.all([
      resolveIpAddressOfFirebaseAuth(),
      resolveIpAddressOfFirestoreInstance()
    ]);
    await getIdTokenFromFirebaseAuth();
    await addAuthenticationToRequest();
    connection = await openConnection();
  }
  return connection;
}

async function doUpdate(...args) {
  const connection = await initConnectionToFirestore();
  // do the work
  connection.send(/* ... */);
}

await doUpdate(); // has to do the work of initConnectionToFirestore
await doUpdate(); // reuses previous work
await doUpdate(); // reuses previous work
await doUpdate(); // reuses previous work
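As a loose, untested sketch of a mitigation: you can pay that initialization cost when the page loads, before the user's first update, by firing a cheap throwaway read as soon as the app starts ('warmup/ping' is just an arbitrary document path here):

// Hypothetical warm-up at app startup: a throwaway read forces the SDK to resolve
// DNS, fetch an ID token and open its connection before the first real update.
const db = firebase.firestore();
db.doc('warmup/ping').get().catch((err) => {
  console.log('Warm-up read failed (safe to ignore): ' + err.message);
});

The first user-triggered update then reuses the already-open connection and should behave like the later, faster executions in the measurements above.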
Firstly, let me preface by saying I am not a developer by any means. I have a tiny coding background which I'm attempting to rely on here, but it's failing me.
My problem is as follows:
I have some code which is basically a bot that performs jobs on the Ethereum network for smart contracts. The code is entirely in JavaScript, so the blockchain aspect of what I'm attempting to do is immaterial. Basically, I'm making an API request which returns a number (called "gas"); this number determines how much my bot is willing to pay to perform a job on the Ethereum network. However, I've learned that the default number the API sends is too low, so I've decided to increase the gas by multiplying the number returned from the API request. When I do that, though, it seems the code tries multiplying the number before it receives the API response, and consequently I get a NaN error. The original code looks like the following:
async work() {
  this.txPending = true;
  try {
    const gas = await this.getGas();
    const tx = await this.callWork(gas);
    this.log.info(`Transaction hash: ${tx.hash}`);
    const receipt = await tx.wait();
    this.log.info(`Transaction confirmed in block ${receipt.blockNumber}`);
    this.log.info(`Gas used: ${receipt.gasUsed.toString()}`);
  } catch (error) {
    this.log.error("While working:" + error);
  }
  this.txPending = false;
}
The changes I make look like the following:
async work() {
  this.txPending = true;
  try {
    let temp_gas = await this.getGas();
    const gas = temp_gas * 1.5;
    const tx = await this.callWork(gas);
    console.log(gas);
    console.log('something');
    const receipt = await tx.wait();
    this.log.info(`Transaction confirmed in block ${receipt.blockNumber}`);
    this.log.info(`Gas used: ${receipt.gasUsed.toString()}`);
  } catch (error) {
    this.log.error("While working:" + error);
  }
  this.txPending = false;
}
From my research, it seems what I need to do is create a promise function, but I am absolutely lost on how to go about creating it. If anyone could help me, I'd be willing to pay them a bit of Ethereum, maybe 0.2 ETH (roughly $150).
Technically, you don't need to determine the gas cost of the transaction at all; you can simply set a fairly large value for gas right away, for example 0x400000. The miner will take as much of that amount as is needed and leave the excess in your account. This is a difference between Ethereum and Bitcoin: in Bitcoin, the entire specified fee is taken by the miner.
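As a rough, untested sketch of what that could look like with ethers.js (contract and work() are placeholders for whatever the bot's callWork() ultimately invokes):

// Hedged sketch: instead of estimating and multiplying, pass a generous fixed
// gas limit; unspent gas is refunded once the transaction is mined.
const tx = await contract.work({ gasLimit: 0x400000 }); // 0x400000 = 4,194,304 gas
const receipt = await tx.wait();
console.log(`Gas actually used: ${receipt.gasUsed.toString()}`);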
My bot got added to a 4.9k-member server, and every time I ran the "zbotstats" command or restarted the console, it didn't add those 4.9k members. What is the issue here?
Before the 4.9k server was added it showed 3.8k, which should have added up to 8.7k.
My code for guild user count:
console.log(`${bot.user.username} is online on ${bot.guilds.cache.size} servers!`);
console.log(`lovell is looking over ${bot.users.cache.size} users`)
Using: discord.js v12
This is because bot.users.cache.size is the number of cached users, and not all the users of all the guilds are cached. So, how do you fix it?
First way: cache ALL the members
You can use the following code to force discord.js to cache all the members in all the servers:
const bot = new Discord.Client({ fetchAllMembers: true });

bot.on('ready', () => {
  console.log(bot.users.cache.size); // right count
});
You will see that bot.users.cache.size is now the right count. Note that this approach will cost a lot of RAM if your bot ends up in a lot of big servers.
Second way: use guild.memberCount instead of the cache
You can use:
const users = bot.guilds.cache
  .map((guild) => guild.memberCount)
  .reduce((total, count) => total + count, 0);
console.log(users); // right count
I'm building a Blog API using Node.js, and my data comes from a scraping service that scrapes multiple news websites live, so there's no database.
The scraping service takes around 30 seconds to return a response for page 1 across all the sites I scrape. (Imagine what pagination will look like in my app :( )
If you don't know what scraping is, just think of it as multiple APIs that I get data from, then combine everything into one results array.
So because of the long response time, I started using the node-cache package for caching, and it cut my request time from 30 seconds to 6 milliseconds (wow, right?).
The problem is that when my cache expires after x time, I have to wait for a random user to hit my endpoint again to regenerate the cache with new data, and that user will wait the whole 30 seconds before getting a response.
I need to avoid that as much as I can, so any ideas? I have searched a lot without finding anything useful; all the articles talk about how to cache, not about techniques for this.
Update
I have found something of a solution: the package I'm using for caching provides, in its API documentation, an event called cache.on('expired', cb), which means I can listen for any cache key that expires.
What I have done is essentially an endless loop: I make the request to myself every time a cache entry expires.
The code:
import NodeCache from 'node-cache';

// `Article` and the xxScraper methods are defined elsewhere in the project
class MyScraperService {
  private cache: NodeCache;

  constructor() {
    this.cache = new NodeCache({ stdTTL: 30, checkperiod: 5, useClones: false });
    this.cache.on('expired', (key: string, data: Article[]) => {
      console.log('key: ', key);
      // request all my articles again as soon as the cache expires
      this.articles(key.charAt(key.length - 1)); // page number
    });
  }

  async articles(page: string): Promise<Article[]> {
    // return the cached result if we still have it
    if (this.cache.get(`articles_page_${page}`)) {
      let all: Article[] = this.cache.get(`articles_page_${page}`); //.sort(() => Math.random() - 0.5);
      return all.sort(() => Math.random() - 0.5);
    }
    // otherwise scrape every source again and combine the results
    let articles: any = await Promise.all([
      this.xxScraper(page),
      this.xxScraper(page),
      this.xxScraper(page),
      this.xxScraper(page),
      this.xxScraper(page),
      this.xxScraper(page),
      this.xxScraper(page),
      this.xxScraper(page),
      this.xxScraper(page),
      this.xxScraper(page)
    ]);
    let all: Article[] = [];
    for (let i = 0; i < articles.length; i++) {
      const article = articles[i];
      all.push(...article);
    }
    this.cache.set(`articles_page_${page}`, all);
    all = all.sort(() => Math.random() - 0.5);
    return all;
  }
}
You might be able to schedule a cron job to call your scraper every [cachingTime - scrapExecutionTime] seconds (in this case, 30) with cron.
I would also suggest increasing the cache TTL to above 1 minute, which would cut down the number of requests made to the other websites.
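As a rough sketch with the node-cron package (scraperService stands in for an instance of the MyScraperService class above; the page value and the 30-second interval are illustrative and should be tuned against the cache TTL):

const cron = require('node-cron');

// Hedged sketch: the server refreshes page 1 on a schedule, so no visitor
// ever has to sit through the full 30-second scrape after the cache expires.
cron.schedule('*/30 * * * * *', async () => {
  await scraperService.articles('1'); // repopulates the cache if the entry has expired
});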
I'm working on a project where I need to make requests to an API. The requests return data about a support ticket, but the problem is that I have about 500 tickets to get data about, and each one requires its own request. To speed things up, I tried to build an async routine that issues many requests at the same time. But since the API I'm integrating with has a rate limit of 10 requests per second, some of the routines get the answer "Limit Exceeded". If I make the requests sequentially, it takes about 5 minutes.
Given that, does anyone have a tip for this task? I tried some solutions like rate-limiter for Node.js, but it just generates 10 requests simultaneously and doesn't provide any error handling or retries when a request fails.
About the language, there's no restriction: the project is written in Node.js but has some Python code too, so integrating another language wouldn't be a problem.
Something like this isn't too difficult to create yourself, and it'd give you the flexibility you need.
There are fancier ways, like tracking the start and completion time of each request and checking whether you've already sent 10 in the current second.
The system probably also limits it to 10 active requests going (i.e., you can't spin up 100 requests, 10 each second, and let them all process).
If you assume this, I'd say launch 10 all at once, then let them complete, then launch the next batch. You could also launch 10, then start 1 additional each time one finishes. You could think of this like a "thread pool".
You can easily track this with a simple variable counting how many calls are in flight. Then, just check that count once a second (to respect the one-second window) and, if you have available "threads", fire off that many new requests.
It could look something like this:
const threadLimit = 10;
const rateLimit = 1000; // ms
let activeThreads = 0;
const calls = new Array(100).fill(1).map((_, index) => index); // create an array 0 through 99 just for an example

function run() {
  if (calls.length == 0) {
    console.log('complete');
    return;
  }
  // threadLimit - activeThreads is how many new threads we can start
  const toStart = threadLimit - activeThreads;
  for (let i = 0; i < toStart && calls.length > 0; i++) {
    activeThreads++; // add a thread
    call(calls.shift())
      .then(done);
  }
  setTimeout(run, rateLimit); // check again after the rate-limit window
}

function done(val) {
  console.log(`Done ${val}`);
  activeThreads--; // remove a thread
}

function call(val) {
  console.log(`Starting ${val}`);
  return new Promise(resolve => waitToFinish(resolve, val));
}

// random function to simulate a network call
function waitToFinish(resolve, val) {
  const done = Math.random() < .1; // 10% chance to finish
  done && resolve(val);
  if (!done) setTimeout(() => waitToFinish(resolve, val), 10);
  return done;
}

run();
Basically, run() just starts up however many new threads it can, based on the limit and how many are currently active. Then it repeats the process every second, adding new ones as capacity frees up.
You might need to play with the threadLimit and rateLimit values, as most rate-limiting systems don't actually let you run right up against the limit, and don't free up capacity the instant a request finishes.
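The question also asked about error handling and retries; one loose way to bolt that onto the sketch above is to re-queue a value whenever its request rejects, so a later batch picks it up again. Here realApiRequest is a placeholder for the real HTTP call, and run()'s completion check would also need to become if (calls.length == 0 && activeThreads == 0) so re-queued work isn't abandoned:

// Hedged sketch: on failure (e.g. a rate-limit response), push the value back
// onto the queue so run() retries it in a later batch.
function call(val) {
  console.log(`Starting ${val}`);
  return realApiRequest(val) // placeholder for the real request, returning a Promise
    .catch((err) => {
      console.log(`Failed ${val}, re-queuing: ${err.message}`);
      calls.push(val);
      return val; // resolve anyway so done() still frees the slot
    });
}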