I have a program that makes a lot of FQL and Graph API calls, and I'm not sure how to handle a 'get' or 'post' error when it happens. How do I get the program to retry? I'm still new to this; could I use some sort of try/catch block? If so, how do I structure it?
I guess this could be extended to any GET timeout in JavaScript.
Thanks
Querying the Facebook Graph API can be a real pain. First of all, I chop my queries into batches. For instance, if I want all of my friends' posts, I first get an array of all my friends, then create queries for their posts, ten friends each, and send them off to Facebook. With every response, I test whether there's an error in it, and if so I restart the function that generates the batches and sends them. I use one counter to keep track of the number of queries sent out: if I send out 10 queries and only get 9 back, I restart the function again after 30 seconds. A second counter limits this to 3 retries, after which I show an error to the user...
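The counter-and-restart logic described above might be sketched like this in Node.js. Note that `runBatches`, `fetchBatch`, and the parameter names are all hypothetical; `fetchBatch` stands in for whatever call you actually make to the Graph API.

```javascript
// Sketch of the batch-and-retry pattern: fire off all batches, count the
// clean responses, and restart the whole run after a delay if any failed.
async function runBatches(fetchBatch, batches, maxRetries = 3, delayMs = 30000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const results = await Promise.allSettled(batches.map(b => fetchBatch(b)));
    const ok = results.filter(r => r.status === "fulfilled" && !r.value.error);
    if (ok.length === batches.length) {
      return results.map(r => r.value); // every query came back cleanly
    }
    // e.g. sent 10 queries but only got 9 good responses: wait, then restart
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error("Batch run failed after " + maxRetries + " retries");
}
```

A try/catch (here via `Promise.allSettled`) only detects the failure; the retry itself is just calling the same function again with a delay and a capped attempt counter.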
Related
I currently have a script that updates data on a website using API calls.
I loop through all the products in my database and make API calls to update each product's data on the website.
Sometimes an error occurs in the middle of the script's execution, so I'm not able to complete all the updates, since I can't get through all the products in the database.
And when I restart my script, it always starts from the beginning (the first products in the database), so I never complete my bulk update.
How can I continue the updates after an error has occurred?
For example, if an error occurred at product number 10, how can I keep doing the updates from product number 11 without restarting my script from the beginning? Is there a MongoDB function that tracks errors?
I'm a beginner trying to understand the logic of bulk updates. I've thought about saving the ID of the product where the error occurred and then resuming the updates after that ID. But I'm sure there's a standard procedure for managing bulk update errors and failures.
I'm using JavaScript, Node.js, and cron for my script.
When you bulk update, errors are reported in a BulkWriteError, and if the operation is ordered, execution stops at the first failure.
First, divide your updates into batches, maybe 1000 each, or fewer or more depending on how many you have in total.
Second, running the batch updates unordered improves performance, since the next update doesn't wait for the current one to finish:
db.collection.bulkWrite(
    [
        { insertOne : <document> },
        { updateOne : <document> },
        { updateMany : <document> },
    ],
    { ordered : false }
)
Third, check the errors for the failed ids and push those operations into the next batch you create, or collect all the failures into one batch and run it once you've completed the rest of the bulk updates.
We follow the same approach for more than 100k records and it works well today. If you or anyone else has a better solution, I would definitely like to hear about it and implement it.
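The "collect failures, retry them later" idea above might look like the following sketch. `applyBatches` and `runBulk` are hypothetical names; `runBulk` stands in for `db.collection.bulkWrite`, which on failure throws a BulkWriteError whose `writeErrors` array carries the index of each failed operation within the batch.

```javascript
// Run ops in unordered batches; harvest the index of each failed op from the
// bulk-write error and return those ops so the caller can retry or report them.
async function applyBatches(runBulk, ops, batchSize = 1000) {
  const pending = ops.slice();
  const stillFailing = [];
  while (pending.length > 0) {
    const batch = pending.splice(0, batchSize);
    try {
      await runBulk(batch, { ordered: false });
    } catch (err) {
      if (!err.writeErrors) throw err;        // not a bulk-write failure
      for (const we of err.writeErrors) {
        stillFailing.push(batch[we.index]);   // remember the op that failed
      }
    }
  }
  return stillFailing; // retry these in a final pass, or log them
}
```

Because the batches are unordered, one bad document no longer aborts the rest of the run; the script can finish the full pass and deal with the stragglers afterwards, which addresses the "restarting from product 1" problem in the question.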
I want my Discord bot to send a message with an attached file and some text, then edit that text a number of times. The problem is that the bot edits the message 5 times, then waits a while, then edits 5 more times, and so on. How can I make it edit messages without stopping?
if (msg.content.includes("letter")) {
    msg.channel.send("alphabet", { files: ["/Users/48602/Videos/discordbot/aaa.png"] })
}
if (msg.content === 'alphabet') {
    msg.edit("**a**")
    msg.edit("**b**")
    msg.edit("**c**")
    msg.edit("**d**") // Here bot stop for a 2 seconds and i dont know why
    msg.edit("**e**")
    msg.edit("**f**")
    msg.edit("**g**")
    msg.edit("**h**")
    msg.edit("**i**")
    msg.edit("**j**") // Here bot stop for a 2 seconds and i dont know why
    msg.edit("**k**")
    msg.edit("**l**")
    msg.edit("**m**")
    msg.edit("**n**")
    msg.edit("**o**") // Here bot stop for a 2 seconds and i dont know why
    msg.delete()
}
Discord has a rate limit of 5 requests per 5 seconds. Trying to bypass this would be considered API abuse (the solutions below are not API abuse).
Exceeding this limit pauses further requests until a certain number of seconds has passed. In my research, I came across this simple explanation:
5 of anything per 5 seconds per server.
On Discord's Developer guide on rate limits, it tells you this:
There is currently a single exception to the above rule [rate limits] regarding different HTTP methods sharing the same rate limit, and that is for the deletion of messages. Deleting messages falls under a separate, higher rate limit so that bots are able to more quickly delete content from channels (which is useful for moderation bots).
One workaround, without API abusing, would be to send messages, and delete the previous messages since there is a higher limit for deleting messages.
Another workaround would be to add intermediate timeouts to your animation.
A simple method such as:
const wait = require("util").promisify(setTimeout);
// syntax: await wait(1000); to "pause" for 1 second
You will need to play around with the timings so it fits your intended animation speed, and without pausing due to the rate limit.
If I use the standard link to parse a Steam user's inventory (https://steamcommunity.com/profiles/{ steamid }/inventory/json/730/2) more than 2-3 times per minute, I get banned from the Steam API for 5 minutes. How can I parse it without getting banned? I'm using Node.js.
Store the result you get from your first request, and re-use it instead of re-querying Steam every time you want to read the data.
pseudocode:
if (!cache)
    getDataFromSteam()
    saveDataToCache()
else
    getDataFromCache()
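A concrete Node.js version of that pseudocode might look like the following. `cachedFetch` and `getInventory` are hypothetical names; `getInventory` stands in for whatever function actually requests the inventory JSON from Steam.

```javascript
// Minimal in-memory cache with a time-to-live: only hit Steam when the
// cached copy is missing or older than ttlMs.
const cache = new Map();

async function cachedFetch(key, getInventory, ttlMs = 5 * 60 * 1000) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < ttlMs) {
    return hit.data;                 // serve from cache, no Steam call
  }
  const data = await getInventory(key);
  cache.set(key, { data, at: Date.now() });
  return data;
}
```

With a 5-minute TTL, repeated reads for the same steamid within that window cost zero API calls, which keeps you well under the ban threshold.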
If you send too many requests, you are sending too many requests; that is a fact you have to accept and work with. I know from my own testing that you are limited to 200 calls in 5 minutes. You have several options:
Implement a routine that limits your calls to 200 per 5 minutes, and no more, in order to avoid getting banned.
Cache the calls you make in order to avoid duplicate calls for the same user.
Use other IP addresses, as mentioned by @Dandavis.
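The first option above might be implemented as a sliding-window limiter. The `RateLimiter` class and its method names are hypothetical, and the 200-per-5-minutes figure is the empirical limit mentioned above, not a documented one.

```javascript
// Refuse to exceed maxCalls within any rolling window of windowMs.
class RateLimiter {
  constructor(maxCalls, windowMs) {
    this.maxCalls = maxCalls;   // e.g. 200
    this.windowMs = windowMs;   // e.g. 5 * 60 * 1000
    this.timestamps = [];
  }
  tryAcquire(now = Date.now()) {
    // drop timestamps that have fallen out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxCalls) return false; // caller must wait
    this.timestamps.push(now);
    return true;
  }
}
```

Before each Steam request, call `tryAcquire()`; if it returns false, delay the request (or queue it) until a slot frees up instead of risking the 5-minute ban.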
I have roughly 1.5 million users I need to import into my parse _User table (we are migrating from our old sql db to Parse).
I've tried using a Background Job, but it is slow and gets killed every 15 minutes. As a result, we have only imported ~300K users in 4 days.
So I decided to go the REST API route and tried the PythonPy lib. It works, except for batch creation of users. Digging into this more, it seems that the REST API forces you to do a User.signup() call.
This of course serializes the whole process.
Is there any way to do batch user creation?
Batch creating / signing up User objects is not supported in Parse. It is mentioned in this question:
https://parse.com/questions/what-user-operations-are-supported-in-batch
How should I design an on-login middleware that checks whether the recurring subscription has failed? I know that Stripe fires events when things happen, and that the best practice is webhooks. The problem is that I can't use webhooks in the current implementation, so I have to check when the user logs in.
The Right Answer:
As you're already aware, webhooks.
I'm not sure what you're doing that webhooks aren't an option in the current implementation: they're just a POST to a publicly-available URL, the same as any end-user request. If you can implement anything else in Node, you can implement webhook support.
Implementing webhooks is not an all-or-nothing proposition; if you only want to track delinquent payments, you only have to implement processing for one webhook event.
The This Has To Work Right Now, Customer Experience Be Damned Answer:
A retrieved Stripe Customer object contains a delinquent field. This field will be set to true if the latest invoice charge has failed.
N.B. This call may take several seconds—sometimes into the double digits—to complete, during which time your site will appear to have ceased functioning to your users. If you have a large userbase or short login sessions, you may also exceed your Stripe API rate limit.
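With those caveats in mind, the login check might be sketched as an Express-style middleware. All names here are hypothetical: `stripeClient` is assumed to be an initialized `stripe` instance, and `req.user.stripeCustomerId` is whatever your session layer provides.

```javascript
// On login, retrieve the Stripe customer and flag delinquent accounts.
function checkDelinquent(stripeClient) {
  return async function (req, res, next) {
    try {
      const customer = await stripeClient.customers.retrieve(req.user.stripeCustomerId);
      req.user.delinquent = customer.delinquent === true;
      next();
    } catch (err) {
      next(err); // don't silently block login on a Stripe outage
    }
  };
}
```

Downstream handlers can then inspect `req.user.delinquent` to gate features or show a payment prompt, but remember the latency and rate-limit warnings above before putting this on every login.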
I actually wrote the Stripe support team an email complaining about this issue (the need to loop through every invoice or customer when trying to pull out delinquent entries), and it appears you can do this without webhooks or wasteful loops; the filtering functionality is just undocumented. The current documentation says you can only modify queries of customers or invoices by count, created (date), and offset, but if you pass in other parameters, the Stripe API will actually try to understand the query. So the cURL request:
https://api.stripe.com/v1/invoices?closed=false&count=100&offset=0
will look for only open invoices. You can also pass a delinquent=true parameter when looking for delinquent customers. I've only tested this in PHP, where returning delinquent customers looks like this:
Stripe_Customer::all(array(
    "delinquent" => true
));
But I believe this should work in Node.js:
stripe.customers.list(
    { delinquent: true },
    function(err, customers) {
        // asynchronously called
    }
);
The big caveat is that because this filtering is undocumented, it could change without notice; but given how obvious the approach is, I'd guess it's pretty safe.