I am trying to get a list of popular repos and users on GitHub.
Their API has an example of finding users given some criteria, which must be sent under the q query param. This parameter is required, but I am not sure how to send it as 'empty'.
The query should list users sorted by followers. I am close, but I am not sure what to send in q:
`https://api.github.com/search/users?q=${WHAT_WOULD_GO_HERE}&sort=followers&order=desc`
Just for reference, I was also trying to get popular repos; that is possible with the following query, which works just fine:
curl https://api.github.com/search/repositories\?q\=stars:\>1+language:javascript\&sort\=stars\&order\=desc\&type\=Repositories
You can run a query against the GitHub API that specifies a follower limit, repository language, and page. If you configure the query correctly, you will get what you want.
Sample query
`https://api.github.com/search/users?q=followers:%3C1000+language:javascript&page=1&per_page=100`
For example, I can fetch all users with more than 2000 followers, which is also a way of getting popular users.
`https://api.github.com/search/users?q=followers:%3E2000+language:javascript&page=1&per_page=100`
Response
{
  "total_count": 321,
  "incomplete_results": false,
  "items": [
    {
      "login": "vim-scripts",
      "id": 443562,
      "node_id": "MDQ6VXNlcjQ0MzU2Mg==",
      "avatar_url": "https://avatars0.githubusercontent.com/u/443562?v=4",
      "gravatar_id": "",
      "url": "https://api.github.com/users/vim-scripts",
      ...
    }
After fiddling around I got the answer:
curl https://api.github.com/search/users\?q\=followers:\>1000\&page\=1\&per_page\=10\&sort\=followers\&order\=desc
The query is based on GitHub's own popular-users list, whose URL offers some clues; the query above returns the exact same result:
https://github.com/search?o=desc&q=followers%3A%3E%3D1000&ref=searchresults&s=followers&type=Users
The q query param needs only this:
followers: >1000
Plus the sorting described in the question:
sort: by follower count
order: descending
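For reference, the same query issued from JavaScript with fetch would look roughly like this; it is only a sketch of the curl call above, with the follower threshold and page size kept as example values:

const url = 'https://api.github.com/search/users'
  + '?q=' + encodeURIComponent('followers:>1000')
  + '&sort=followers&order=desc&page=1&per_page=10';

fetch(url)
  .then(res => res.json())
  .then(data => {
    // items is already ordered by follower count, descending
    data.items.forEach(user => console.log(user.login, user.html_url));
  });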
Related
I am working on a Spring Boot app where I am using Postgres for my data storage, and I am using pagination to structure my data.
I receive data like this:
{
  "messages": {
    "dtoList": [
      {
        "acknowledge_msg": "null",
        "status": "QUEUED",
        "msg_id": 2021082012204616000
      },
      {
        "acknowledge_msg": "null",
        "status": "QUEUED",
        "msg_id": 2021082012204575500
      }
    ],
    "totalRecords": 4,
    "pageSize": 2,
    "pageNumber": 1,
    "numPages": 2
  }
}
Now, in my React page, when I navigate between pages, I simply make this API call with the page size and page number, and it gives me a response.
Now I want to apply filters, but the filters need to apply across all the records, not just the current page.
How can I achieve this?
What you want to achieve concerns only the backend, not the React application.
You should send the filter you would like to apply to your Spring Boot application and apply it in your PostgreSQL query. That also updates the pagination, since fewer results will be returned.
Maybe show your frontend code here, instead of the JSON data; that would make it easier to help you apply the filters remotely.
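As a rough sketch of the frontend side (the endpoint path and parameter names here are assumptions, since the controller code isn't shown), the filter is simply sent along with the paging parameters and the backend applies it in the Postgres query:

// hypothetical endpoint and parameter names; adjust them to your controller
fetch('/api/messages?pageNumber=1&pageSize=2&status=QUEUED')
  .then(res => res.json())
  .then(data => {
    // totalRecords and numPages now describe the filtered result set
    console.log(data.messages.totalRecords, data.messages.numPages);
  });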
As was said before, the only proper way to achieve this is to do it on the backend.
You can, of course, read all of the pages and filter them on the frontend, but that is an awful solution.
I'm trying to figure out a way to grab the top 50,000 most subscribed YouTube channels using JavaScript. These only need to be grabbed once and will be stored in a file to be used for an autocomplete input in a webpage.
I've gotten pretty close to getting the top 50 by using search.list (/youtube/v3/search) with the parameters maxResults=50, order=viewCount, part=snippet, type=channel, fields=nextPageToken,items(snippet(channelId,title))
Returning:
{
  "nextPageToken": "CDIQAA",
  "items": [{
    "snippet": {
      "channelId": "UC-9-kyTW8ZkZNDHQJ6FgpwQ",
      "title": "Music"
    }
  }, {
    "snippet": {
      "channelId": "UC-lHJZR3Gqxm24_Vd_AJ5Yw",
      "title": "PewDiePie"
    }
  }, {
    "snippet": {
      "channelId": "UCVPYbobPRzz0SjinWekjUBw",
      "title": "Анатолий Шарий"
    }
  }, {
    "snippet": {
      "channelId": "UCam8T03EOFBsNdR0thrFHdQ",
      "title": "VEGETTA777"
    }
  }, ...
Then all I'd have to do is fetch that 1000 more times using the nextPageToken to get a list of the top 50,000.
Unfortunately, sorting by relevance, rating, viewCount, or nothing at all does not yield the 50 most subscribed channels, and according to the documentation there doesn't seem to be any way to order them by subscriber count, so it seems like I am stuck.
Just before writing your 50 results to a file (or database), you can make one more API call: take the channelId field from each result, merge them into a comma-delimited list, and make another API call to Channels: list.
On that page, for example, you can use the following parameters:
(these are IDs from your example above)
part=statistics
id=UC-9-kyTW8ZkZNDHQJ6FgpwQ,UC-lHJZR3Gqxm24_Vd_AJ5Yw,UCVPYbobPRzz0SjinWekjUBw,UCam8T03EOFBsNdR0thrFHdQ
And the result will look something like this:
{
  "kind": "youtube#channel",
  "etag": "\"m2yskBQFythfE4irbTIeOgYYfBU/MG6zgnd09mqb3nAdyRnPDgFwfkE\"",
  "id": "UC-lHJZR3Gqxm24_Vd_AJ5Yw",
  "statistics": {
    "viewCount": "15194203723",
    "commentCount": "289181",
    "subscriberCount": "54913094",
    "hiddenSubscriberCount": false,
    "videoCount": "3175"
  }
}
And you can take subscriberCount from the result for each channel.
I know this does not sort your 50 results as they are being written to the file, but with it you can later sort your results by subscriberCount when reading them back for your autocomplete input.
I didn't find any other way to sort results by subscriber count, so maybe this can be helpful.
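A minimal sketch of that extra call in JavaScript, using the channel IDs from the example above (YOUR_API_KEY is a placeholder):

const ids = [
  'UC-9-kyTW8ZkZNDHQJ6FgpwQ',
  'UC-lHJZR3Gqxm24_Vd_AJ5Yw',
  'UCVPYbobPRzz0SjinWekjUBw',
  'UCam8T03EOFBsNdR0thrFHdQ'
].join(',');

fetch('https://www.googleapis.com/youtube/v3/channels?part=statistics&id=' + ids + '&key=YOUR_API_KEY')
  .then(res => res.json())
  .then(data => {
    // sort by subscriberCount (returned as a string) before writing to your file
    const sorted = data.items.sort((a, b) =>
      Number(b.statistics.subscriberCount) - Number(a.statistics.subscriberCount));
    sorted.forEach(c => console.log(c.id, c.statistics.subscriberCount));
  });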
The idea is to run a server-side script that makes RESTful API calls in a loop and writes the results to a .json file. For that you can create a PHP script that makes the REST API call to Google, fetches the first 50 results, and then uses file write operations to save them. Run that PHP script as a cron job to update the results at regular intervals; executing the cron job at whatever interval you set keeps the results fresh.
Run the cURL command in a loop over the next-page tokens, fetching 50 results each time, and save all the results to a temporary .json file. Once your results are fetched, replace your old JSON file with the newly created temporary file. This regenerates a fresh JSON file at regular intervals, picking up any changes to the data.
The point of the temporary file is to avoid slowing down AJAX reads caused by concurrent read and write operations on the same file. Once the temporary file is written, simply use a move command to replace the actual file.
Make sure you use cache-control headers on the AJAX responses to keep the data fresh.
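The answer describes a PHP script, but the same idea in Node.js looks roughly like this (YOUR_API_KEY and the file names are placeholders); the loop follows nextPageToken, and the temp-file-then-rename step is the atomic replace described above:

const fs = require('fs');

async function refreshChannels() {
  const items = [];
  let pageToken = '';
  do {
    const url = 'https://www.googleapis.com/youtube/v3/search'
      + '?part=snippet&type=channel&order=viewCount&maxResults=50'
      + '&key=YOUR_API_KEY'
      + (pageToken ? '&pageToken=' + pageToken : '');
    const data = await (await fetch(url)).json(); // built-in fetch, Node 18+
    items.push(...(data.items || []).map(item => item.snippet));
    pageToken = data.nextPageToken;
  } while (pageToken && items.length < 50000);

  // write to a temp file first, then swap it in so readers never see a partial file
  fs.writeFileSync('channels.json.tmp', JSON.stringify(items));
  fs.renameSync('channels.json.tmp', 'channels.json');
}

refreshChannels();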
I'm trying to query my db to get all docs with their full info. The db.list function gets an overview of the docs but does not return all the data for each doc. Currently I have to get the high-level data, then loop through the rows and query each individual doc. There must be a better way...
Is there a way to get all docs with the full set of info for each doc?
getting:
{
  "id": "0014ee0d-7551-4639-85ef-778f74365d05",
  "key": "0014ee0d-7551-4639-85ef-778f74365d05",
  "value": {
    "rev": "59-4f01f89e12c488ba5b8aba4643982c45"
  }
}
want:
{
  "_id": "14fb92ad75b8694c05b98d89de6e9b2d",
  "_rev": "1-6067c00b37a18ad8bab6744d258e6439",
  "offeringId": "ae0146696d1d3a90fe400cc55a97a60e",
  "timestamp": 1464165870848,
  "srcUrl": "",
  "score": 9,
  ...
}
The repository you linked to for nano looks like an outdated mirror. The official repository includes documentation for db.list which also includes a params object. I would double-check your version of nano, but I would guess you already have a more recent version.
You can simply add { include_docs: true } as the first argument to db.list, and alongside id, key, and value you'll get a doc property that has the entire document.
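A minimal sketch with nano (the connection URL and database name are placeholders):

const nano = require('nano')('http://localhost:5984');
const db = nano.db.use('mydb');

// callback style (older nano versions); newer versions return a promise instead
db.list({ include_docs: true }, function (err, body) {
  if (err) throw err;
  // with include_docs, every row carries the full document under row.doc
  const docs = body.rows.map(function (row) { return row.doc; });
  console.log(docs);
});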
I'm accessing the Facebook Graph API for posts and am trying to figure out the pagination handling. I understand the use of the paging.next and paging.previous properties of the results, but I'd like to know when there are actually previous results. In particular, when I make the first 'posts' call, I get back a paging.previous URL even though there are no previous values. Upon calling that URL I get a response with no results.
For example, calling "168073773388372/posts?limit=2" returns the following:
{
  "data": [
    {
      "story": "Verticalmotion test added a new photo.",
      "created_time": "2015-12-02T17:04:56+0000",
      "id": "168073773388372_442952469233833"
    },
    {
      "message": "http://www.youtube.com/watch?v=QD2Rdeo8vuE",
      "created_time": "2013-12-16T23:19:30+0000",
      "id": "168073773388372_184840215045061"
    }
  ],
  "paging": {
    "previous": "https://graph.facebook.com/v2.6/168073773388372/posts?limit=2&format=json&since=1449075896&access_token=****&__paging_token=enc_AdA69SApv4VoBZB0PPZA7W5EivCYQal8KMFmRNkyhr8ZBk4w0YmFEQUJWV3JZBS70ihyMpbqieQaERhY50enqNCMBuIZATadeopYj8xPvQL7Y8KueaQZDZD&__previous=1",
    "next": "https://graph.facebook.com/v2.6/168073773388372/posts?limit=2&format=json&access_token=****&until=1387235970&__paging_token=enc_AdAVMaUlPmpxjBmq5ZClVdNpFp7f9MyMFWjE7ygqsMLW7zvSx3eGHLkfwDxdCx0uO3ooAZCKDmCwMWHZA9RNyxkYUPJyjMtO3kynKm5uF2PhoPZB2gZDZD"
  }
}
How can I tell if it's the first set of results?
From tidbits scattered around the documentation and web, it seems like the previous url shouldn't be there.
I don't think it matters because I get the same results in the Graph Explorer but I'm using OpenFB to access the API.
You can set the order to reverse chronological, then get the 1st result.
https://developers.facebook.com/docs/graph-api/using-graph-api
Ordering
You can order certain data sets chronologically. For example you may sort a photo's comments in reverse chronological order using the key reverse_chronological:
GET graph.facebook.com
/{photo-id}?
fields=comments.order(reverse_chronological)
order must be one of the following values:
chronological
reverse_chronological
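As a sketch in JavaScript (PHOTO_ID and ACCESS_TOKEN are placeholders), the ordering parameter from the docs can be applied and the first item taken from the ordered result:

const url = 'https://graph.facebook.com/v2.6/PHOTO_ID'
  + '?fields=comments.order(reverse_chronological)'
  + '&access_token=ACCESS_TOKEN';

fetch(url)
  .then(res => res.json())
  .then(data => {
    const comments = data.comments ? data.comments.data : [];
    // with reverse_chronological ordering, the newest comment comes first
    console.log(comments[0]);
  });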
I know that if I simply try to get a list of my friends, it will only return app friends, AKA friends who have also given the app permissions.
So, rightfully so, when I do a GET 'me/friends', I only see 5 friends.
However, when I do a GET me/posts/?fields=likes.fields(name), it obviously returns data with a list of friends who have liked my posts:
{
  "id": "post1id",
  "likes": {
    "data": [
      {
        "name": "name1",
        "id": "id1"
      },
      {
        "name": "name2",
        "id": "id2"
      }....
    ]
  }
It shows all friends, both non-app and app friends.
My question is, if all I want to use is my friends' names and id's from this result, would it be allowed, considering that some of these friends are not app-friends?
If you use the API with the user token, you may receive all likes (including friends who don't use the app, non-friends, friends who use the app, and pages), but this is by no means an indication of a user's friends. Anyway, you can utilize ALL data provided by the Graph API as long as you don't work around the limitations with something like exploiting the game invite feature.