Firstly, let me preface by saying I am not a developer by any means. I have a tiny coding background which I'm attempting to rely on here, but it's failing me.
My problem is as follows:
I have some code that is basically a bot performing jobs on the Ethereum network for smart contracts. The code is entirely in JavaScript, so the blockchain aspect of what I'm attempting to do is immaterial. Essentially, I perform an API request that returns a number (called "gas"); this number determines how much my bot is willing to pay to perform a job on the Ethereum network. However, I've learned that the default number the API sends is too low, so I decided to increase the gas by multiplying the number received from the API request. When I do that, though, it seems the code tries to multiply the number before the API request has returned, and consequently I get a NaN error. The original code looks like the following:
async work() {
  this.txPending = true;
  try {
    const gas = await this.getGas();
    const tx = await this.callWork(gas);
    this.log.info(`Transaction hash: ${tx.hash}`);
    const receipt = await tx.wait();
    this.log.info(`Transaction confirmed in block ${receipt.blockNumber}`);
    this.log.info(`Gas used: ${receipt.gasUsed.toString()}`);
  } catch (error) {
    this.log.error("While working:" + error);
  }
  this.txPending = false;
}
The changes I make look like the following:
async work() {
  this.txPending = true;
  try {
    let temp_gas = await this.getGas();
    const gas = temp_gas * 1.5;
    const tx = await this.callWork(gas);
    console.log(gas);
    console.log('something');
    const receipt = await tx.wait();
    this.log.info(`Transaction confirmed in block ${receipt.blockNumber}`);
    this.log.info(`Gas used: ${receipt.gasUsed.toString()}`);
  } catch (error) {
    this.log.error("While working:" + error);
  }
  this.txPending = false;
}
From my research, it seems what I need to do is create a promise function, but I'm absolutely lost on how to go about it. If anyone could help me, I'd be willing to pay them a bit of Ethereum, maybe 0.2 (roughly $150).
Technically, you don't need to determine the gas cost of the transaction at all; you can simply set a fairly large value for gas right away, for example 0x400000. The miner will take from this amount only as much as necessary and leave the excess in your account. This is a difference between Ethereum and Bitcoin: in Bitcoin, the miner takes the entire specified fee.
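As a minimal sketch of that suggestion (assuming callWork passes gas straight through as the transaction's gas limit, as in the code above), the work method could skip estimation entirely:
async work() {
  this.txPending = true;
  try {
    // Skip gas estimation and pass a generous fixed gas limit;
    // unused gas is refunded on Ethereum, unlike a Bitcoin fee.
    const gas = 0x400000; // hypothetical upper bound, tune it for your contract
    const tx = await this.callWork(gas);
    this.log.info(`Transaction hash: ${tx.hash}`);
    const receipt = await tx.wait();
    this.log.info(`Gas used: ${receipt.gasUsed.toString()}`);
  } catch (error) {
    this.log.error("While working:" + error);
  }
  this.txPending = false;
}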
The following code:
const { JsonRpcProvider } = require("@ethersproject/providers");
const { Contract } = require("ethers");
const { Wallet } = require("@ethersproject/wallet");
const abi = require('./abi.json');
const GLOBAL_CONFIG = {
  PPV2_ADDRESS: "0x18B2A687610328590Bc8F2e5fEdDe3b582A49cdA",
  PRIVATE_KEY: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  BSC_RPC: "https://bsc-mainnet.public.blastapi.io"
};
const signer = new Wallet(GLOBAL_CONFIG.PRIVATE_KEY, new JsonRpcProvider(GLOBAL_CONFIG.BSC_RPC));
const contract = new Contract(GLOBAL_CONFIG.PPV2_ADDRESS, abi, signer);
const predictionContract = contract.connect(signer);
predictionContract.on("StartRound", async (epoch) => {
  console.log("\nStarted Epoch", epoch.toString());
});
It has been working perfectly for months. However, last night it stopped, with no new builds or code changes on my end. I've been trying everything I can think of, but nothing has worked. The signer seems to return my wallet details OK. I can also see all the functions on the predictionContract, but I can't get it to return the current Epoch value, etc. As you've probably already noticed, I'm not much of a coder, so any help in understanding this would be amazing.
After some more time thinking, I noticed that the contract seemed pretty busy and decided to try a different RPC (I'd already tried three before). It seems to be working now, but that's not exactly scientific. Is there any way to monitor how long requests take and where the lag is? I had a design idea to test multiple RPCs during initialization and then use the fastest / most reliable one, but I have no idea where to start with that!
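For what it's worth, a rough sketch of that idea (the alternative RPC URLs below are placeholders, not recommendations) is to time a cheap call such as getBlockNumber() against each candidate endpoint at startup and use the fastest one for GLOBAL_CONFIG.BSC_RPC:
const { JsonRpcProvider } = require("@ethersproject/providers");

const RPC_URLS = [
  "https://bsc-mainnet.public.blastapi.io",
  "https://bsc-dataseed.binance.org", // placeholder alternatives
  "https://rpc.ankr.com/bsc"
];

async function pickFastestRpc(urls) {
  const results = await Promise.all(urls.map(async (url) => {
    const provider = new JsonRpcProvider(url);
    const start = Date.now();
    try {
      await provider.getBlockNumber(); // cheap request, measures round-trip latency
      return { url, ms: Date.now() - start };
    } catch (err) {
      return { url, ms: Infinity };    // unreachable endpoints sort last
    }
  }));
  results.sort((a, b) => a.ms - b.ms);
  console.log(results);
  return results[0].url;
}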
I've been working on a Discord bot command to fetch all of the archived threads on a server. I'm currently running into the 100 limit and am looking for a way to get around that limit.
Here's a post I made previously and was able to get up to 100 threads fetched from that.
I found this post where they made a system to fetch more than 100 messages, but I haven't been able to successfully convert that into code to fetch more than 100 threads. Unfortunately, dates and orders have been a bit inconsistent when printing threads, so getting that data has been a challenge.
Here's the code that I have so far that's only fetching 100 threads:
client.on('messageCreate', async function (message) {
  if (message.content === "!test") {
    const guild = client.guilds.cache.get("GUILD_ID_HERE"); // ID removed for privacy reasons
    var textChannels = message.guild.channels.cache.filter(c => c.type === ChannelType.GuildText);
    textChannels.forEach(async (channel, limit = 500) => {
      let archivedThreads = await channel.threads.fetchArchived({limit: 100});
      console.log(archivedThreads);
    });
  }
});
In my final code, I also am printing this to a text file rather than the console, but I've left that additional code out here to keep this code more simplified for debugging.
Any ideas on how I can iterate through and print multiple sets of 100 threads at a time to bypass the limitations?
The function passed to Array.forEach is not awaited. You can use a for...of loop or a classic for loop inside an async function to await each fetch, so the threads are processed sequentially.
var textChannels = message.guild.channels.cache.filter(c => c.type === ChannelType.GuildText);
async function fetchArchives() {
  // textChannels is a discord.js Collection (a Map), so iterate its values
  for (const channel of textChannels.values()) {
    let archivedThreads = await channel.threads.fetchArchived({limit: 100});
    console.log(archivedThreads);
  }
}
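To go beyond 100 threads per channel, a hedged sketch (assuming your discord.js version's fetchArchived accepts a before option and returns a hasMore flag on the fetched result; check the docs for your version) is to keep requesting pages and use the archive timestamp of the last thread on each page as the cursor:
async function fetchAllArchivedThreads(channel) {
  const all = [];
  let before;          // undefined on the first page
  let hasMore = true;
  while (hasMore) {
    const fetched = await channel.threads.fetchArchived({ limit: 100, before });
    all.push(...fetched.threads.values());
    hasMore = fetched.hasMore;
    const last = fetched.threads.last();
    if (!last) break;
    // use the oldest returned thread's archive timestamp as the next cursor
    before = last.archiveTimestamp;
  }
  return all;
}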
I wrote a very simple TypeScript program, which does the following:
Transform users.csv into an array
For each element/user issue an API call to create that user on a 3rd party platform
Print any errors
The CSV file has over 160,000 rows and there is no way to create them all in one API call, so I wrote this program to run in the background on my computer for roughly 20 hours.
The first time I ran this, the code stopped mid-loop without an exception or anything. So, I deleted the user rows from the CSV file that were already uploaded and re-ran the code. Unfortunately, this kept happening.
Interestingly, the code has stopped at non-deterministic iterations; one time it was at i=812, another at i=27650, and so on.
This is the code:
import { promises as fsPromises } from "fs";
import axios from "axios";
// makeArray and sleep are helpers defined elsewhere in the project

const main = async () => {
  const usersFile = await fsPromises.readFile("./users.csv", { encoding: "utf-8" });
  const usersArr = makeArray(usersFile);
  for (let i = 0; i < usersArr.length; i++) {
    const [userId, email] = usersArr[i];
    console.log(`uploading ${userId}. ${i}/${usersArr.length}`);
    try {
      await axios.post(/* create user */);
      await sleep(150);
    } catch (err) {
      console.error(`Error uploading ${userId} -`, err.message);
    }
  }
};
main();
I should mention that the try/catch is inside the for loop because many rows fail to upload with a 400 error code. I prefer to have the code run non-stop and print any errors to a file, so that I can later re-run it for the users that failed to upload; otherwise I would have to check every 10 minutes whether it had halted because of an error.
Why does this happen, and what can I do?
I run after compiling as: node build/index.js 2>>errors.txt
EDIT:
There is no code after main() and no code outside the try ... catch block within the loop. errors.txt only contains 400 errors. Even if it contained another run-time exception, it seems to me that this wouldn't/shouldn't halt execution, because it would execute catch and move on to the next iteration.
I think this may have been related to this post. The file I was reading was extremely large, as noted, and it was held in a runtime variable. Non-deterministically, the OS could have decided that the memory demand was too high. This is probably a situation where a Readable Stream should be used instead of readFile.
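A rough sketch of that approach (assuming one comma-separated user per line, which is what the original makeArray helper implied) streams the file line by line instead of loading it all into memory:
import { createReadStream } from "fs";
import * as readline from "readline";
import axios from "axios";

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const main = async () => {
  const rl = readline.createInterface({
    input: createReadStream("./users.csv", { encoding: "utf-8" }),
    crlfDelay: Infinity, // treat \r\n as a single line break
  });
  let i = 0;
  for await (const line of rl) {
    const [userId, email] = line.split(",");
    console.log(`uploading ${userId}. ${i++}`);
    try {
      await axios.post(/* create user */);
      await sleep(150);
    } catch (err) {
      console.error(`Error uploading ${userId} -`, err.message);
    }
  }
};
main();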
I have developed an Actor + PuppeteerCrawler + Proxy based crawler and want to rescrape failed pages. To increase the chance of the rescrape succeeding, I want to switch to another proxyUrl. The idea is to create a new crawler with a modified launchPuppeteer function and a different proxyUrl, and to re-enqueue the failed pages. Please check the sample code below.
Unfortunately, it doesn't work, although I reset the request queue by dropping and reopening it. Is it possible to rescrape failed pages using PuppeteerCrawler with a different proxyUrl, and if so, how?
Best regards,
Wolfgang
for (let retryCount = 0; retryCount <= MAX_RETRY_COUNT; retryCount++) {
  if (retryCount) {
    // Try to reset the request queue so that failed requests will be rescraped
    await requestQueue.drop();
    requestQueue = await Apify.openRequestQueue(); // this is necessary to avoid exceptions
    // Re-enqueue failed urls from the failedUrls array >>> ignored although using drop() and reopening the request queue!!!
    for (let failedUrl of failedUrls) {
      await requestQueue.addRequest({ url: failedUrl });
    }
  }
  crawlerOptions.launchPuppeteerFunction = () => {
    return Apify.launchPuppeteer({
      // generates a new proxy url and adds it to a new launchPuppeteer function
      proxyUrl: createProxyUrl()
    });
  };
  let crawler = new Apify.PuppeteerCrawler(crawlerOptions);
  await crawler.run();
}
I think your approach should work but on the other hand it should not be necessary. I'm not sure what createProxyUrl does.
You can supply a generic proxy URL with the auto username, which will rotate over all your datacenter proxies at Apify. Or you can provide proxyUrls directly to PuppeteerCrawler.
Just don't forget that you have to switch browsers to get a new IP from the proxy. More in this article: https://help.apify.com/en/articles/2190650-how-to-handle-blocked-requests-in-puppeteercrawler
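As a hedged illustration (option names differ between Apify SDK versions, so check the docs for yours), the generic auto proxy URL can be supplied through launchPuppeteerOptions instead of a custom launchPuppeteerFunction; the password placeholder comes from your Apify proxy settings:
const Apify = require('apify');

Apify.main(async () => {
  const requestQueue = await Apify.openRequestQueue();
  // ... enqueue urls here ...

  const crawler = new Apify.PuppeteerCrawler({
    requestQueue,
    launchPuppeteerOptions: {
      // the "auto" username rotates across all datacenter proxies in your account
      proxyUrl: 'http://auto:<APIFY_PROXY_PASSWORD>@proxy.apify.com:8000',
    },
    handlePageFunction: async ({ request, page }) => {
      // ... scrape the page ...
    },
  });

  await crawler.run();
});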
I'm building a very simple scraper to get the 'now playing' info from an online radio station I like to listen to.
It's stored in a simple p element on their site:
(screenshot: the HTML location of the now-playing data)
Now, using the standard apify/web-scraper, I run into a strange issue. The scraping sometimes works and sometimes doesn't, using this code:
async function pageFunction(context) {
  const { request, log, jQuery } = context;
  const $ = jQuery;
  const nowPlaying = $('p.js-playing-now').text();
  return {
    nowPlaying
  };
}
If the scraper works I get this result:
[{"nowPlaying": "Hangover Hotline - hosted by Lamebrane"}]
But if it doesn't I get this:
[{"nowPlaying": ""}]
And there is only a 5 minute difference between the two scrapes. The website doesn't change, the data is always presented in the same way. I tried checking all the boxes to circumvent security and different mixes of options (Use Chrome, Use Stealth, Ignore SSL errors, Ignore CORS and CSP) but that doesn't seem to fix it unfortunately.
Any suggestions on how I can get this scraping task to constantly return the data I need?
It would be great if you could attach the URL; it would help me find the problem.
With the information you provided, I guess that the data you want is loaded asynchronously. You can use the context.waitFor() function.
async function pageFunction(context) {
  const { request, log, jQuery } = context;
  const $ = jQuery;
  await context.waitFor(() => !!$('p.js-playing-now').text());
  const nowPlaying = $('p.js-playing-now').text();
  return {
    nowPlaying
  };
}
You can pass a function to waitFor(), and it will wait until the result of that function is truthy. You can check the docs.