Where can I view the secondary rate limit of the GitHub REST API? - javascript

I'm trying to build something with GitHub's REST API using Octokit, but I'm having trouble dealing with their secondary rate limit. Where can I view this secondary rate limit, or find out exactly how it works? The documentation seems very vague about the inner workings of this secondary rate limit.
The rate limits can be viewed using the endpoint below, but it does not include the secondary rate limit.
await octokit.request('GET /rate_limit', {})
Also, the documentation only provides best practices for avoiding the secondary rate limit, and even following those does not help.
Specifically, I'm calling their follow-a-user endpoint every 2 seconds and sleeping for 5 minutes after every 24 requests, but at some point I still hit the secondary rate limit.
await octokit.request('PUT /user/following/{username}', {
  username: 'USERNAME'
})
At this point, the only solution that I can think of is to slow down even further with the requests, but that is not optimal.
I wonder if GitHub has a way to view this secondary rate limit so I can deal with it programmatically, or at least in a much clearer way.

You can see the secondary rate limits configured on an on-premises GHE (GitHub Enterprise) instance.
Type limits for:
Total Requests,
CPU Limit, and
CPU Limit for Searching,
or accept the pre-filled default limits.
But even that instance does not expose an API to configure those settings: it is a web-page interface only.
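Since the secondary limit itself cannot be queried, one practical option is to react to it rather than predict it. Below is a rough sketch (not an official recipe) assuming the @octokit/plugin-throttling plugin and a recent Octokit release where the hook is named onSecondaryRateLimit (older releases called it onAbuseLimit); the plugin uses the retry-after header GitHub returns with secondary rate limit responses:
const { Octokit } = require("@octokit/rest");
const { throttling } = require("@octokit/plugin-throttling");

const ThrottledOctokit = Octokit.plugin(throttling);

const octokit = new ThrottledOctokit({
  auth: "YOUR_TOKEN", // placeholder
  throttle: {
    onRateLimit: (retryAfter, options, octokit, retryCount) => {
      octokit.log.warn(`Primary rate limit hit for ${options.method} ${options.url}`);
      return retryCount < 2; // retry at most twice
    },
    onSecondaryRateLimit: (retryAfter, options, octokit) => {
      // retryAfter is derived from GitHub's retry-after response header (in seconds).
      octokit.log.warn(`Secondary rate limit hit, waiting ${retryAfter}s before retrying`);
      return true; // tell the plugin to retry after the suggested delay
    },
  },
});

await octokit.request('PUT /user/following/{username}', { username: 'USERNAME' });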

Related

How often can I rename a Discord channel?

This is not a post about HOW to change a channel's name (I know how to do that).
I have an international server using several bots, and we all depend on UTC time (to coordinate across the world). So the solution was born to make a time-bot that shows the current UTC time in a dedicated channel nobody can visit. And yes, precision is necessary, even the seconds.
I created a voice channel that #everyone does not have permission to join. Everything worked fine; it updated every 1000 ms. Then (after several months of working well) something broke and it started updating incorrectly. I increased the update interval to 5000 ms and it started working fine again... until yesterday.
Now it doesn't work anymore, even if I increase the interval much further. It updates only sometimes, at an interval I can't pin down; it's huge and unpredictable. The time-bot is broken for now and cannot be used in this state.
Are there any restrictions on updating a channel name? I can't find any information about this in the available documentation.
Client.setInterval(() => {
  const { h, m, s } = getTime();
  channel.edit({ name: `${getClockEmoji({ h, m })} UTC: ${h}-${m}-${s}` }).catch((err) => console.log(err));
}, updateInterval);
The data I provide is correct, because I log it to the console and it updates as often as I need at the interval I set. But the channel name does not update that often.
Does Discord filter update requests that come too often?
The discord.js version is v12.2.0.
Discord has set the rate limit for things like channel renames to 2 requests per 10 minutes.
"Normal" requests like sending a message are limited to 10,000 per 10 minutes.
This is likely an issue directly related to rate limiting:
https://discord.com/developers/docs/topics/rate-limits
IP addresses that make too many invalid HTTP requests are automatically and temporarily restricted from accessing the Discord API. Currently, this limit is 10,000 per 10 minutes. An invalid request is one that results in 401, 403, or 429 statuses.
For every API request made, we return optional HTTP response headers containing the rate limit encountered during your request.
You should probably increase the interval by a considerable amount (i.e., rename far less often) to reduce the risk of the IP being restricted.
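If the clock only needs to stay within that limit of roughly 2 renames per 10 minutes, one option is to drop the seconds and rename far less often. A rough sketch reusing the getTime() and getClockEmoji() helpers from the question (the 5-minute interval is just an example value):
const UPDATE_INTERVAL = 5 * 60 * 1000; // 5 minutes => at most 2 renames per 10 minutes

Client.setInterval(() => {
  const { h, m } = getTime();
  // Seconds are omitted: they cannot be kept accurate at this cadence anyway.
  channel
    .edit({ name: `${getClockEmoji({ h, m })} UTC: ${h}-${m}` })
    .catch((err) => console.log(err));
}, UPDATE_INTERVAL);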

How can I or my app know that Chrome is discarding IndexedDB items?

I have an app which has recently started to randomly lose indexeddb items. By "lose" I mean they are confirmed as having been saved, but days later, are no longer present.
My hypothesis is that Chrome is discarding indexeddb items because the disk is full.
My question is specifically, are there any events that my app can listen for, or any Chrome log entries that I can refer to in order to confirm this is the case. NB. I'm not looking to fix the problem, I am only looking for ways in which I can detect it.
Each application can query how much data is stored, or how much more space is available to the app, by calling the queryUsageAndQuota() method of the Quota API.
You can use a periodic Background Sync to estimate the used and free space allocated to temporary usage:
// index.html
navigator.serviceWorker.ready.then(registration => {
  registration.periodicSync.register('estimate-storage', {
    // Minimum interval at which the sync may fire.
    minInterval: 24 * 60 * 60 * 1000,
  });
});

// service_worker.js
self.addEventListener('periodicsync', event => {
  if (event.tag == 'estimate-storage') {
    event.waitUntil(estimateAndNotify());
  }
});
To estimate the usage, call navigator.storage.estimate(), which returns a Promise that resolves with { usage, quota } values in bytes.
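For completeness, here is one possible shape of the estimateAndNotify() function referenced above; it is only a sketch (the 95% threshold and the notification text are illustrative, and showing a notification assumes notification permission has been granted):
// service_worker.js
async function estimateAndNotify() {
  // navigator.storage.estimate() is also available inside service workers; values are in bytes.
  const { usage, quota } = await navigator.storage.estimate();
  const usedRatio = usage / quota;
  if (usedRatio > 0.95) {
    await self.registration.showNotification('Storage almost full', {
      body: `Using ${(usedRatio * 100).toFixed(1)}% of a ${Math.round(quota / 1e6)} MB quota`,
    });
  }
}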
When TEMPORARY storage quota is exceeded, all the data (incl.
AppCache, IndexedDB, WebSQL, File System API) stored for oldest used
origin gets deleted.
You can switch to unlimitedStorage or persistent storage.
I'm not sure how it will work in real life, since this feature was initially still somewhat experimental,
but here https://developers.google.com/web/updates/2016/06/persistent-storage you can find that
This is still under development [...] the goal is to make users are aware of “persistent” data before clearing it [...] you can presume that “persistent” means your data won’t get cleared without the user being explicitly informed and directly in control of that deletion.
Which may answer the clarification from your comments that you are looking for a way
how can a user or my app know for sure that this is happening
So, I don't know about your app, but the user can perhaps get a notification. At least that page is from 2016; something must have been done by now.
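For reference, requesting persistent storage from the page looks roughly like this (the browser may prompt the user or decide silently based on its own heuristics):
if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persisted()
    .then((alreadyPersisted) => alreadyPersisted || navigator.storage.persist())
    .then((persisted) => {
      console.log(persisted
        ? 'Storage is persistent: it should not be evicted without user involvement'
        : 'Storage is still "best effort" and may be evicted under pressure');
    });
}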
You may refer to Browser storage limits and eviction criteria article on MDN.
My hypothesis is that Chrome is discarding indexeddb items because the disk is full
According to that article, your hypothesis seems to be true.
Now, one way to confirm this could be to save the unique IDs of confirmed items in persistent storage (this will be very small compared to your actual data) and periodically check whether all of those items are still present in your IndexedDB.
You might also want to refer to best practices.
And finally, this comment
I think you need IndexedDB Observer. These links may help you if I got what you meant. Link-01 - Link-02
might be helpful.

User Rate Limit Exceeded with google drive API

I'm building a web application on top of the Google Drive API. Basically, the web application displays photos and videos. The media is stored in a Google Drive folder: once authenticated, the application makes requests to the Google Drive API to get a URL for the media and displays each one. For the moment, I have only 16 images to display. These images are hard-coded in the application (for the demo).
I have encountered an issue with my application accessing the Google Drive API. Indeed, after multiple tries, I get this error for random requests:
User Rate Limit Exceeded. Rate of requests for user exceed configured project quota.
You may consider re-evaluating expected per-user traffic to the API and
adjust project quota limits accordingly.
You may monitor aggregate quota usage and adjust limits in the API Console:
https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXX"
So I looked at the API console and saw nothing special; as far as I can tell, I don't exceed the rate limit. Maybe I'm using the Google API wrong, I honestly don't know...
I followed the Google Drive API documentation to check whether I did something wrong. Each API request contains the access token, so it should work correctly!
A demonstration of the app is available: https://poc-drive-api.firebaseapp.com
The source code is also available: https://github.com/Mcdostone/poc-google-drive-api (file App.js)
403: User Rate Limit Exceeded is flood protection. A user can only make so many requests at a time. Unfortunately, the user rate limit is not shown in the graph you are looking at. That graph is actually really bad at showing what is truly happening. Google checks in the background and returns the error if you are exceeding your limit. They are not required to actually show us that in the graph.
403: User Rate Limit Exceeded
The per-user limit has been reached. This may be the limit from the Developer Console or a limit from the Drive backend.
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "userRateLimitExceeded",
        "message": "User Rate Limit Exceeded"
      }
    ],
    "code": 403,
    "message": "User Rate Limit Exceeded"
  }
}
Suggested actions:
Raise the per-user quota in the Developer Console project.
If one user is making a lot of requests on behalf of many users of a G Suite domain, consider a Service Account with authority delegation (setting the quotaUser parameter).
Use exponential backoff.
IMO, the main thing to do when you begin to encounter this error message is to implement exponential backoff; that way, your application will be able to slow down and retry the request.
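A generic sketch of such a backoff (not Google's own client code; the error check assumes the Node.js googleapis client, where err.code carries the HTTP status):
async function withBackoff(makeRequest, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await makeRequest();
    } catch (err) {
      const status = err && err.code;
      const retriable = status === 403 || status === 429;
      if (!retriable || attempt === maxRetries) throw err;
      // Wait 1s, 2s, 4s, ... plus random jitter before retrying.
      const delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage (drive being an authenticated googleapis Drive client):
// const res = await withBackoff(() => drive.files.list({ pageSize: 16 }));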
In my case, I was recursing through Google Drive folders in parallel and getting this error. I solved the problem by implementing client-side rate limiting using the Bottleneck library with a 110ms delay between requests:
const Bottleneck = require('bottleneck');

const limiter = new Bottleneck({
  // Google allows 1000 requests per 100 seconds per user,
  // which is 100ms per request on average. Adding a delay
  // of 100ms still triggers "rate limit exceeded" errors,
  // so going with 110ms.
  minTime: 110,
});

// Wrap every API request with the rate limiter
await limiter.schedule(() => drive.files.list({
  // Params...
}));
I was using the limiter library to enforce the "1000 queries per 100 seconds" limit, but I was still getting 403 errors. I finally stumbled upon this page where it mentions that:
In the API Console, there is a similar quota referred to as Requests per 100 seconds per user. By default, it is set to 100 requests per 100 seconds per user and can be adjusted to a maximum value of 1,000. But the number of requests to the API is restricted to a maximum of 10 requests per second per user.
So I updated the limiter library to only allow 10 requests every second instead of 1,000 every 100 seconds and it worked like a charm.
const RateLimiter = require('limiter').RateLimiter;
const limiter = new RateLimiter(10, 1000);
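For reference, with that limiter v1.x positional API, each Drive call first takes a token before firing; a minimal sketch (callback style, the drive client is assumed):
limiter.removeTokens(1, (err, remainingRequests) => {
  if (err) return console.error(err);
  // A token was acquired, so it is safe to fire the Drive API request now.
  drive.files.list({ /* params */ }, (apiErr, res) => {
    if (apiErr) return console.error(apiErr);
    // handle res.data.files here
  });
});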
You can use this zero-dependency library I've created called rate-limited-queue, to limit the execution rate of tasks in a queue.
Limiting 10 requests per second can be achieved like so:
const createQueue = require("rate-limited-queue");

const queue = createQueue(
  1000, /* time-based sliding window */
  10    /* max concurrent tasks in the sliding window */
);

const results = await queue([
  () => { /* a request code goes here */ },
  () => { /* another request code goes here */ }
  // ...
]);
You can download any particular file from Google Drive to your Colab.
Use
!gdown https://drive.google.com/uc?id=14ikT5VererdfeOnQtIJREINKSDN
where the value after id= is what you get when you click on the share link of the file in Drive.

Google Maps JavaScript DirectionsRenderers limit

I am looking to display many routes on one map instance, each route in a different color. For this I created an array of DirectionsRenderer objects, assigning a separate DirectionsRenderer to each route.
The problem is that the GoogleMap component displays only the first 10 DirectionsRenderers. I was not able to find any info on this limitation on the Google developers website. Is there any way to use more than 10 DirectionsRenderers at the same time?
Thanks for your help in advance.
Just to add to what #geocodezip said:
"I was not able to find any info on this limitation in google developers website."
Here you go:
Usage limits and policies
- Up to 8 waypoints per request, plus the origin and destination. // 10 overall
- 50 requests per second, calculated as the sum of client-side and server-side queries.
- 2,500 free requests per day, calculated as the sum of client-side and server-side queries; enable billing to access higher daily quotas, billed at $0.50 USD / 1,000 additional requests, up to 100,000 requests daily.
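For reference, the per-route renderer setup the question describes looks roughly like this (routes is an assumed array of {origin, destination} pairs; note that firing many route() calls in quick succession also counts toward the 50 requests per second limit above):
const directionsService = new google.maps.DirectionsService();
const colors = ['#e6194b', '#3cb44b', '#4363d8', '#f58231'];

routes.forEach((route, i) => {
  // One renderer per route so each polyline can have its own color.
  const renderer = new google.maps.DirectionsRenderer({
    map: map, // the google.maps.Map instance
    polylineOptions: { strokeColor: colors[i % colors.length] },
  });
  directionsService.route(
    {
      origin: route.origin,
      destination: route.destination,
      travelMode: google.maps.TravelMode.DRIVING,
    },
    (result, status) => {
      if (status === 'OK') renderer.setDirections(result);
    }
  );
});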

Rate limit API queries in node.js

I am attempting to rate limit queries to the Battle.net API up to their limits of 100 calls/second, 36,000/hour.
My current code looks like this:
var async = require('async');
var bnet = require('battlenet-api');

async.eachSeries(regions, function(region, callback) {
  bnet.wow.realmStatusAsync({origin: region}).spread(/*...snip...*/)
});
What I need here is to make sure that no calls to battlenet-api exceed those limits, and to queue them if they would.
I have looked at timequeue.js, limiter and async's own capabilities, and none of them seems to provide what I am looking for.
Does someone know how to achieve this?
A couple possibilities are:
Wait until you get throttled and back off your requests.
Keep track of your limits and how much of them you have used inside your application.
While I think both cases need to be handled, the second one is more valuable, because it will let your application know at any time how many requests have been made in a given timeframe.
You could focus on implementing this at the hourly level and let the backoff/retry handle the cases where you go over the 100 req/s limit.
To implement this, your application has to keep track of how many requests are available for the given time period. Before each request, the count has to be checked to see whether there are requests available; if there are, the API request can be made, and if not, no request is made and your application throttles itself. Additionally, the counters need to be reset when the window rolls over.
There are quite a few articles on using Redis to accomplish this; a quick Google search should uncover them.
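If everything runs in a single process, a minimal in-memory sketch of that bookkeeping could look like the following (fixed windows rather than sliding ones; bnet comes from the question's code, everything else is assumed):
const HOURLY_LIMIT = 36000;
const PER_SECOND_LIMIT = 100;

let hourCount = 0;
let secondCount = 0;
setInterval(() => { hourCount = 0; }, 60 * 60 * 1000); // reset the hourly window
setInterval(() => { secondCount = 0; }, 1000);          // reset the per-second window

async function throttledRealmStatus(region) {
  // Wait until both windows have capacity before making the request.
  while (hourCount >= HOURLY_LIMIT || secondCount >= PER_SECOND_LIMIT) {
    await new Promise((resolve) => setTimeout(resolve, 50));
  }
  hourCount++;
  secondCount++;
  return bnet.wow.realmStatusAsync({ origin: region });
}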
