User Rate Limit Exceeded with Google Drive API - JavaScript

I'm building a web application on top of the Google Drive API. Basically, the web application displays photos and videos. The media is stored in a Google Drive folder: once authenticated, the application makes requests to the Google Drive API to get a URL for each media file and displays it. For the moment, I have only 16 images to display. These images are hard-coded in the application (for the demo).
I have encountered an issue with my application accessing the Google Drive API. After multiple attempts, I get this error for random requests:
User Rate Limit Exceeded. Rate of requests for user exceed configured project quota.
You may consider re-evaluating expected per-user traffic to the API and
adjust project quota limits accordingly.
You may monitor aggregate quota usage and adjust limits in the API Console:
https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXX
So I looked at the API console and saw nothing special; as far as I can tell, I am not exceeding the rate limit. Maybe I'm using the Google API incorrectly, I honestly don't know.
I followed the Google Drive API documentation to check whether I did something wrong. Each API request contains the access token, so it should work correctly!
A demonstration of the app is available: https://poc-drive-api.firebaseapp.com
The source code is also available: https://github.com/Mcdostone/poc-google-drive-api (file App.js)

403: User Rate Limit Exceeded is flood protection. A user can only make so many requests at a time. Unfortunately, the user rate limit is not shown in the graph you are looking at; that graph is actually really bad at showing what is truly happening. Google checks in the background and returns the error if you are exceeding your limit. They are not required to actually show us that in the graph.
403: User Rate Limit Exceeded
The per-user limit has been reached. This may be the limit from the Developer Console or a limit from the Drive backend.
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "userRateLimitExceeded",
        "message": "User Rate Limit Exceeded"
      }
    ],
    "code": 403,
    "message": "User Rate Limit Exceeded"
  }
}
Suggested actions:
Raise the per-user quota in the Developer Console project.
If one user is making a lot of requests on behalf of many users of a G Suite domain, consider a Service Account with authority delegation (setting the quotaUser parameter).
Use exponential backoff.
IMO, the main thing to do when you begin to encounter this error message is to implement exponential backoff; that way your application will be able to slow down and make the request again.
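As a rough illustration, here is a minimal backoff sketch (the retry count, delays, and error-shape checks are assumptions, not values mandated by the Drive API):
// Minimal exponential backoff sketch (retry count and delays are illustrative).
async function withBackoff(requestFn, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await requestFn();
    } catch (err) {
      // Error shape depends on the client library; gapi rejections expose `status`.
      const status = err.status || err.code;
      const retryable = status === 403 || status === 429;
      if (!retryable || attempt === maxRetries) throw err;
      // 1s, 2s, 4s, ... plus random jitter, as Google suggests.
      const delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: wrap each Drive API call, e.g.
// const res = await withBackoff(() => gapi.client.drive.files.list({ pageSize: 16 }));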

In my case, I was recursing through Google Drive folders in parallel and getting this error. I solved the problem by implementing client-side rate limiting with the Bottleneck library, using a 110 ms delay between requests:
const Bottleneck = require('bottleneck');

const limiter = new Bottleneck({
  // Google allows 1000 requests per 100 seconds per user,
  // which is 100ms per request on average. Adding a delay
  // of 100ms still triggers "rate limit exceeded" errors,
  // so going with 110ms.
  minTime: 110,
});

// Wrap every API request with the rate limiter
await limiter.schedule(() => drive.files.list({
  // Params...
}));

I was using the limiter library to enforce the "1000 queries per 100 seconds" limit, but I was still getting 403 errors. I finally stumbled upon this page where it mentions that:
In the API Console, there is a similar quota referred to as Requests per 100 seconds per user. By default, it is set to 100 requests per 100 seconds per user and can be adjusted to a maximum value of 1,000. But the number of requests to the API is restricted to a maximum of 10 requests per second per user.
So I updated the limiter library to only allow 10 requests every second instead of 1,000 every 100 seconds and it worked like a charm.
const RateLimiter = require('limiter').RateLimiter;

// At most 10 requests per 1000 ms
const limiter = new RateLimiter(10, 1000);
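For completeness, here is a sketch of how a Drive call could be wrapped with that limiter, assuming the callback-style removeTokens API of limiter 1.x and the Node.js googleapis client:
// Each call first waits for a token from the limiter, then performs the actual request.
function listFiles(params) {
  return new Promise((resolve, reject) => {
    limiter.removeTokens(1, (err) => {
      if (err) return reject(err);
      drive.files.list(params).then(resolve, reject);
    });
  });
}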

You can use this zero-dependency library I've created, called rate-limited-queue, to limit the execution rate of tasks in a queue.
Limiting to 10 requests per second can be achieved like so:
const createQueue = require("rate-limited-queue");

const queue = createQueue(
  1000 /* time based sliding window */,
  10 /* max concurrent tasks in the sliding window */);

const results = await queue([
  () => { /* a request code goes here */ },
  () => { /* another request code goes here */ }
  // ...
]);

You can download any particular file from Google Drive to your Colab notebook using gdown:
!gdown https://drive.google.com/uc?id=14ikT5VererdfeOnQtIJREINKSDN
where the value after id= is what you get when you click on the file's share link in Drive.

Related

Where can I view the secondary rate limit of GitHub REST api?

I'm trying to create something with GitHub's REST API using Octokit, but I'm having trouble dealing with their secondary rate limit. I wonder where I can view this secondary rate limit, or know exactly how it works? The documentation seems to be very vague about the inner workings of this secondary rate limit.
The rate limits can be viewed using the endpoint below, but it does not include the secondary rate limit.
await octokit.request('GET /rate_limit', {})
Also the documentation only provides best practices to avoid this secondary rate limit, but even this does not help.
Specifically, I'm calling their follow-a-user endpoint every 2 seconds and sleeping for 5 minutes after every 24 requests, but at some point I still hit this secondary rate limit.
await octokit.request('PUT /user/following/{username}', {
username: 'USERNAME'
})
At this point, the only solution I can think of is to slow the requests down even further, but that is not optimal.
I wonder if GitHub has a way to view this secondary rate limit so I can deal with it programmatically, or in a clearer way?
You can see the secondary rate limits configured on an on-premises GHE (GitHub Enterprise) instance.
You can type limits for:
Total Requests,
CPU Limit, and
CPU Limit for Searching,
or accept the pre-filled default limits.
But even that instance does not expose an API to configure those elements: web page interface only.
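For public github.com, where that admin page does not exist, about the best you can do programmatically is honor the retry-after header that secondary-rate-limit responses may include. A rough sketch (the error shape is an assumption about Octokit's RequestError):
// Retry once after the server-suggested delay when a secondary rate limit is hit.
async function followUser(octokit, username) {
  try {
    return await octokit.request('PUT /user/following/{username}', { username });
  } catch (err) {
    const headers = (err.response && err.response.headers) || {};
    const retryAfter = Number(headers['retry-after']);
    if (err.status === 403 && !Number.isNaN(retryAfter)) {
      await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
      return octokit.request('PUT /user/following/{username}', { username });
    }
    throw err;
  }
}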

How do I get the remaining daily mail quota of google apps scripts service to work consistently?

I did a project in Google Apps Script to send automatic emails after a form response is submitted. However, when I check the daily quota of remaining emails with the MailApp.getRemainingDailyQuota() method, the returned quota varies with each script execution.
So I created another project just to test the quota, using the MailApp.getRemainingDailyQuota() method, and even so, the returned quota varied with each execution.
Code used to test:
function testeDeCota() {
  let cota;
  cota = MailApp.getRemainingDailyQuota();
  Logger.log("Remaining email quota: " + cota);
  cota = MailApp.getRemainingDailyQuota();
  Logger.log("Remaining email quota: " + cota);
}
I am using a workspace account that has a quota of 1500 emails/day.
This is a screenshot of my executions. Note that there are 3 consecutive executions without sending any email, and even so the quota responses varied. The responses were:
1394;
1399;
1390.
Whenever I try to send an email or get the quota information the number just varies randomly.
[screenshot of the script executions]
This seems to be intended behavior.
Apart from the older issue Marios mentioned (Gmail SendEmail Quota - Decrementing Bug/issue), this was reported recently in Issue Tracker, and closed as Intended Behaviour:
MailApp.getRemainingDailyQuota() is giving incorrect information
See comments #6 and #8:
This small fluctuation in the value returned by this method is to be expected.
This behaviour is caused by how quotas are internally handled by Apps Script. It's expected behaviour.
Different executions vs Loop:
Interestingly, this discrepancy can only be seen when calling this method in separate invocations. When calling this method in a loop instead, no fluctuation is shown:
for (let i = 1; i <= n; i++) {
  SpreadsheetApp.openById(documentID).getSheetByName(sheetName).appendRow([new Date(), MailApp.getRemainingDailyQuota()]);
  Utilities.sleep(1000);
}
File a feature request:
As suggested in the referenced issue, if this is impacting your workflow somehow, you could consider filing a feature request.

How often can I rename discord channel's name?

This is not a post about HOW to change a channel's name (I know how to do that).
I have an international server using several bots, and we all depend on UTC time (to coordinate across the world). So a solution was born: a time-bot that shows the current UTC time in a dedicated channel nobody can visit. And yes, precision is necessary, even seconds.
I created a voice channel with permissions so that #everyone cannot join. Everything worked fine, it updated every 1000 ms. Then (after several months of working well) something broke and it started updating incorrectly. I increased the update interval to 5000 ms and it started to work fine... until yesterday.
Now it doesn't work anymore, even if I increase the interval much more. It works sometimes; I don't really know what the effective interval is, it's huge and unpredictable. The time-bot is broken for now and cannot be used anymore in this state.
Are there any restrictions on updating a channel's name? I can't find any information about this in the available documentation.
Client.setInterval(() => {
  const { h, m, s } = getTime();
  channel.edit({ name: `${getClockEmoji({ h, m })} UTC: ${h}-${m}-${s}` }).catch((err) => console.log(err));
}, updateInterval);
The data itself is correct, because I also log it to the console, where it updates as often as I need at the interval I set. But the channel name does not update that often.
Does Discord filter requests that come too often?
discord.js version is v12.2.0
Discord has set the rate limit for things like renaming a channel to 2 requests per 10 minutes.
"Normal" requests like sending a message are limited to 10,000 per 10 minutes.
This seems likely to be an issue directly related to rate limiting:
https://discord.com/developers/docs/topics/rate-limits
IP addresses that make too many invalid HTTP requests are automatically and temporarily restricted from accessing the Discord API. Currently, this limit is 10,000 per 10 minutes. An invalid request is one that results in 401, 403, or 429 statuses.
For every API request made, we return optional HTTP response headers containing the rate limit encountered during your request.
You should probably decrease the interval by a considerable amount to reduce the risk of the IP being restricted.
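Given the 2-renames-per-10-minutes limit, a clock channel can realistically only show hours and minutes, updated at most every 5 minutes or so. A sketch of that compromise, reusing the asker's getTime and getClockEmoji helpers:
// Rename at most every 5 minutes to stay under 2 channel edits per 10 minutes.
const FIVE_MINUTES = 5 * 60 * 1000;
Client.setInterval(() => {
  const { h, m } = getTime();
  channel.edit({ name: `${getClockEmoji({ h, m })} UTC: ${h}-${m}` })
    .catch((err) => console.log(err));
}, FIVE_MINUTES);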

Running tasks in parallel with reliability in cloud functions

I'm streaming and processing tweets in Firebase Cloud Functions using the Twitter API.
In my stream, I am tracking various keywords and users on Twitter, so the influx of tweets is very high and a new tweet is delivered even before I have processed the previous one, which leads to gaps, as a new tweet sometimes does not get processed.
This is how my stream looks:
...
const stream = twitter.stream('statuses/filter', {track: [various, keywords, ..., ...], follow: [userId1, userId2, userId3, userId3, ..., ...]});

stream.on('tweet', (tweet) => {
  processTweet(tweet); // This takes time because there are multiple network requests involved and also sometimes recursively running functions, depending on the tweet's properties.
})
...
processTweet(tweet) essentially compiles threads from Twitter, which takes time depending on the length of the thread, sometimes a few seconds. I have optimised processTweet(tweet) as much as possible to compile the threads reliably.
I want to run processTweet(tweet) in parallel and queue the tweets that come in while one is being processed, so that everything runs reliably, as the Twitter docs specify:
Ensure that your client is reading the stream fast enough. Typically you should not do any real processing work as you read the stream. Read the stream and hand the activity to another thread/process/data store to do your processing asynchronously.
Help would be very much appreciated.
This twitter streaming API will not work with Cloud Functions.
Cloud Functions code can only be invoked in response to incoming events, and the code may only run for up to 9 minutes max (the default is 60 seconds). After that, the function code is forced to shut down. With Cloud Functions, there is no way to continually process a stream of data coming from an API.
In order to use this API, you will need to use some other compute product that allows you to run code indefinitely on a dedicated server instance, such as App Engine or Compute Engine.
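If you do move the stream reader to something long-running, one common pattern is to keep the reader trivial and hand each tweet to Pub/Sub, letting a Cloud Function subscribed to that topic do the heavy processing, which matches the Twitter guidance quoted above. A sketch, assuming a Pub/Sub topic named tweets already exists:
const { PubSub } = require('@google-cloud/pubsub');
const topic = new PubSub().topic('tweets'); // hypothetical topic name

// Long-running stream reader: do no real work here, just enqueue the tweet.
stream.on('tweet', (tweet) => {
  topic.publishMessage({ json: tweet }).catch(console.error);
});

// Separate Cloud Function, triggered once per Pub/Sub message, does the slow work.
exports.handleTweet = async (message) => {
  const tweet = JSON.parse(Buffer.from(message.data, 'base64').toString());
  await processTweet(tweet); // the asker's existing processing function
};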

Rate limit API queries in node.js

I am attempting to rate limit queries to the Battle.net API up to their limits of 100 calls/second, 36,000/hour.
My current code looks like this:
var async = require('async');
var bnet = require('battlenet-api');

async.eachSeries(regions, function(region, callback) {
  bnet.wow.realmStatusAsync({origin: region}).spread(/*...snip...*/)
});
What I need here is to make sure that no calls to battlenet-api exceed the limit, and if they would, to queue them.
I have looked at timequeue.js, limiter, and async's own capabilities, and none of them seems to provide what I am looking for.
Does someone know how to achieve this?
A couple of possibilities are:
Wait until you get throttled and back off your requests
Keep track of your limits and used limits inside of your application.
While I think both cases need to be handled, the second one is more valuable, because it lets your application know at any time how many requests have been made in a given timeframe.
You could focus on implementing this on the hour level and letting the backoff/retry catch the cases of going over the 100 req/s timelimit.
To implement this, your application would have to keep track of how many requests are available for the given time period. Before each request the request count has to be checked to see if there are available requests, if there are the API request can be made and if not then no request will be made and your application will throttle itself. Additionally, the limits need to be reset when appropriate.
There are quite a few articles on using Redis to accomplish this; a quick Google search should uncover them.
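As a bare-bones, in-memory illustration of the tracking described above (Redis would replace the in-memory counter for anything distributed; the 36,000-per-hour figure comes from the question):
// Fixed-window counter for the hourly budget; per-second bursts are left to backoff/retry.
const HOUR_MS = 60 * 60 * 1000;
const HOURLY_LIMIT = 36000;
let windowStart = Date.now();
let used = 0;

function tryConsume() {
  const now = Date.now();
  if (now - windowStart >= HOUR_MS) { // reset the window once an hour
    windowStart = now;
    used = 0;
  }
  if (used >= HOURLY_LIMIT) return false; // caller should wait or queue the request
  used++;
  return true;
}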
