I'm using the YouTube API to generate a page that loads YouTube videos. The stack I'm using is HTML, CSS, and AngularJS. I want a button that generates a random video for a given search query. The way I was planning to do this is to use the pageToken parameter.
I noticed that the token "CAEQAA" always returns the second page of search results for the query, and following that, "CAIQAA" gives the page after that. This makes me think these tokens are independent of the search query.
However, this behaviour might still be specific to my search options (one video per page of search results, safe search = strict, etc.), even if it is independent of the query itself. Is there a way to retrieve all the possible page tokens as a list or in some other form? That way, I could select a random token from the list to pick a random page of search results and thus a random video.
If I am misunderstanding how this works, please let me know, as I am new to using this sort of API. Any help is appreciated. Thanks!
I wrote an algorithm that can generate a pageToken for any given number in the range [0, 100000] (you can install it with npm install youtube-page-token).
With the package you can (sketched below):
1) fetch the first page of results,
2) get the total result count,
3) pick a random number in that range,
4) generate a token for that number, and
5) plug it back into the YouTube API.
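A minimal sketch of those five steps. `generatePageToken` stands in for whatever youtube-page-token actually exports (check the package README; the import name here is an assumption), and `YOUR_API_KEY` is a placeholder:

```javascript
const generatePageToken = require('youtube-page-token'); // export name assumed

const KEY = 'YOUR_API_KEY';
const BASE = 'https://www.googleapis.com/youtube/v3/search';

async function randomVideo(query) {
  const q = encodeURIComponent(query);

  // 1) + 2) Fetch the first page and read the (approximate) total count.
  const first = await fetch(`${BASE}?part=snippet&type=video&maxResults=1&q=${q}&key=${KEY}`)
    .then(r => r.json());
  const total = Math.min(first.pageInfo.totalResults, 100000); // token range cap

  // 3) + 4) Pick a random offset and generate a token for it.
  const token = generatePageToken(Math.floor(Math.random() * total));

  // 5) Plug the token back into the same search request.
  const page = await fetch(`${BASE}?part=snippet&type=video&maxResults=1&q=${q}&pageToken=${token}&key=${KEY}`)
    .then(r => r.json());
  return page.items[0];
}
```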
After going through the documentation and other solutions proposed on SO, it seems like statistics can only work with id but not snippet?
My app lets the user search for a keyword and returns a list, showing the number of likes for each and every video.
My sample request is as below:
https://www.googleapis.com/youtube/v3/search?part=snippet&maxResults=50&type=video&q=rihanna&key={YOUR_API_KEY}
Is it really how it should be? Querying a list of 50 or more videos at once, then firing 50 individual calls just to get the likes? It seems very bad. Is there any chance I can return statistics along with the videos?
P.S.: Now I truly see the value of GraphQL.
The official YouTube API documentation for search.list states that you can only pass "snippet" for the part parameter.
part (string): The part parameter specifies a comma-separated list of one or more search resource properties that the API response will include. Set the parameter value to snippet.
The next thing to do after retrieving the videos with their corresponding IDs is to pass those IDs to videos.list, which does accept statistics in its part parameter.
Usage
The id parameter value is a comma-separated list of YouTube video IDs. You might issue a request like this to retrieve additional information about the items in a playlist or the results of a search query.
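So it's one extra request for the whole batch, not one per video. A sketch of the two-step flow (`YOUR_API_KEY` is a placeholder; the field names come from the v3 API docs):

```javascript
const KEY = 'YOUR_API_KEY';

async function searchWithLikes(query) {
  // Step 1: search.list only gives us snippets and video IDs.
  const search = await fetch(
    `https://www.googleapis.com/youtube/v3/search?part=snippet&maxResults=50&type=video&q=${encodeURIComponent(query)}&key=${KEY}`
  ).then(r => r.json());

  // Join all 50 video IDs into one comma-separated list.
  const ids = search.items.map(item => item.id.videoId).join(',');

  // Step 2: a single videos.list call returns statistics for all of them.
  const videos = await fetch(
    `https://www.googleapis.com/youtube/v3/videos?part=snippet,statistics&id=${ids}&key=${KEY}`
  ).then(r => r.json());

  // Each item now carries statistics.likeCount alongside its snippet.
  return videos.items;
}
```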
In my react based single page application, my page is divided in two panes.
Left Pane: Filter Panel.
Right Pane: Grid (table containing data that passes through applied filters)
In summary, I have an application that looks very similar to amazon.com. By default, when a user hits the application's root endpoint (/) in the browser, I fetch the last 7 days of data from the server and show it inside the grid.
The filter panel has a couple of filters (e.g. a time filter to fetch data that falls inside a specified time interval, IDs to search for data with a specific ID, etc.) and a search button in its header. Hitting the search button makes a POST call to the server with the selected filters in the POST body; the server returns the data that matches those filters, and my frontend displays it inside the grid.
Now, when someone hits the search button in the filter panel, I want to reflect the selected filters in the URL's query parameters, because that will let me share these URLs with other users of my website, so that they can see the filters I applied and only the grid data matching those filters.
The problem here is that if, on search button click, I use HTTP GET with query parameters, I will end up breaking the application because of the limit different browsers impose on URL length.
Please suggest a correct solution for creating such URLs that will set the selected filters in the filter panel without causing any side effects in my application.
Possible solution: Considering that we cannot directly add plain strings to query parameters because of the URL length limitations of different browsers (note: the specification does not limit the length of an HTTP GET request, but browsers implement their own limits), we could use something like a message digest or hash (converting input of arbitrary length into output of fixed length) and save it in a DB so the server can understand the request and serve the content back. This is just a thought; I am not sure whether it is an ideal solution to this problem.
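For illustration, that fixed-length property is exactly what a digest gives you (a quick sketch with Node's built-in crypto; the filter strings are made up):

```javascript
const crypto = require('crypto');

const digest = s => crypto.createHash('sha256').update(s).digest('hex');

// However long the serialized filter string grows, the digest stays the
// same size, so the query parameter length stays bounded.
console.log(digest('brand=sony').length);                               // 64
console.log(digest('brand=sony&price=100-500&rating=4&seller=abc').length); // 64
```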
Behavior of other heavily used websites:
amazon.com, newegg.com -> use hashed URLs.
kayak.com -> since they have very well-defined keywords, they use short forms like IN for India, BLR for Bangalore, etc., and combine this with negation logic to further optimize the maximum URL length. Not checked, but this will ideally break after a large selection of filters.
flipkart.com -> appends strings directly to query parameters and breaks after the limit is breached (verified).
In response to @cauchy's answer, we need to make a distinction between hashing and encryption.
Hashing
Hashes are by necessity irreversible. In order to map a hash to its specific filter combination, you would either need to
1) hash each permutation of filters on the server for every request to try matching the requested hash (computationally intensive), or
2) store a map of hash to filter combination on the server (memory intensive).
For the vast majority of cases, option 1 is going to be too slow. Depending on the number of filters and options, option 2 may require a sizable map, but it's still your best option.
Encryption
In this scheme, the server would send its public key to the client, then the client could use that to encrypt its filter options. The server would then decrypt the encrypted data with its private key. This is good, but your encrypted data will not be fixed length. So, as more options are selected, you run into the same problem of indeterminate parameter length.
Thus, in order to ensure your URL is short for any number of filters and options, you will need to maintain a mapping of hash->selection on the server.
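A minimal sketch of that mapping, assuming Node/Express with an in-memory Map standing in for a real DB; the route names and the f query parameter are illustrative, not from the question:

```javascript
const crypto = require('crypto');
const express = require('express');

const app = express();
app.use(express.json());

const selections = new Map(); // hash -> filter selection (use a real DB in production)

// Client POSTs its filter selection; the server answers with a short,
// fixed-length hash that can go into a shareable URL like /?f=<hash>.
app.post('/filters', (req, res) => {
  // NOTE: real code should canonicalize key order so equal selections
  // always produce the same hash.
  const hash = crypto.createHash('sha256')
    .update(JSON.stringify(req.body))
    .digest('hex')
    .slice(0, 12);
  selections.set(hash, req.body);
  res.json({ hash });
});

// Resolving a shared URL: translate the hash back into the filter selection.
app.get('/filters/:hash', (req, res) => {
  const filters = selections.get(req.params.hash);
  if (!filters) return res.sendStatus(404);
  res.json(filters);
});

app.listen(3000);
```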
How should we handle permanent vs temporary links?
You mentioned in your comment above
If we use some persistent store to save the mapping between this hash and the actual filters, we would ideally want to segregate long-lived "permalinks" from short-lived ephemeral URLs, and use that understanding to efficiently expire the short-lived hashes.
You likely have a service on the server that handles all of the filters that you support in your application. The trick here is letting that service also manage the hashmap. As more filters and options are added/removed, the service will need to re-hash each permutation of filter selections.
If you need strong support for permalinks, then whenever you remove filters or options, you'll want to maintain the "expired" hashes and change their mapping to point to a reasonable alternative hash.
When do we update hashes in our DB?
There are lots of options, but I would generally prefer build time. If you're using a CI solution like Jenkins, Travis, AWS CodePipeline, etc., then you can add a build step to update your DB. Basically, you're going to do the following (a sketch follows the list):
1) Keep a persistent record of all the existing supported filters.
2) On build, check to see if there are any new filters. If so:
   - Add those filters to the record from step 1.
   - Hash all new filter permutations (just those that include your new filters) and store them in the hash DB.
3) Check to see if any filters have been removed. If so:
   - Remove those filters from the record from step 1.
   - Find all the hashes for permutations that include those filters and either:
     - remove those hashes from the DB (weak permalinks), or
     - point each hash to a reasonable alternative hash in the DB (strong permalinks).
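A self-contained sketch of that build step, with in-memory stand-ins for the persistent record and the hash DB; every name here, and the naive pairwise "permutations", are illustrative simplifications:

```javascript
const crypto = require('crypto');

const hashOf = perm =>
  crypto.createHash('sha256').update(perm.join('|')).digest('hex').slice(0, 12);

let record = ['brand', 'price']; // step 1: persistent record of supported filters
const hashDb = new Map();        // hash -> filter permutation

function onBuild(currentFilters) {
  const added = currentFilters.filter(f => !record.includes(f));
  const removed = record.filter(f => !currentFilters.includes(f));

  // Step 2: hash only the new permutations -- here, naively, every pair
  // that includes a newly added filter.
  for (const f of added) {
    for (const g of currentFilters) {
      if (g !== f) {
        const perm = [f, g].sort();
        hashDb.set(hashOf(perm), perm);
      }
    }
  }

  // Step 3: expire hashes whose permutations used a removed filter
  // (weak permalinks; strong permalinks would re-point instead of delete).
  for (const [hash, perm] of hashDb) {
    if (perm.some(f => removed.includes(f))) hashDb.delete(hash);
  }

  record = currentFilters.slice(); // update the persistent record
}

onBuild(['brand', 'rating']); // 'price' removed, 'rating' added
console.log(hashDb);
```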
Let's analyse your problem and the possible solutions.
Problem: You want a URL that carries information about the applied filters, so that when you share that URL, the user doesn't land on an arbitrary page.
Solutions:
1) Append the applied filters to the URL. To achieve this, you will need to shorten both the filter keys and the filter values so that the URL length doesn't grow much per filter.
Drawback: This is not the most reliable solution; as the number of filters increases, the URL length has to increase, with no way around it.
2) Append a unique key (hash) for the applied filters to the URL. To achieve this, you will need changes on both server and client. On the client side you will need an encoding algorithm that converts the applied filters to a unique hash. On the server side you will need a decoding algorithm that converts the unique hash back to the applied filters. Then, whenever a URL like this is hit, the client can make a POST API call that takes the hash and returns the array of applied filters, or you can put the decoding logic on the client side only.
Do all this in componentWillMount to avoid any side effects.
I think the 2nd solution is scalable and efficient in almost all cases.
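A sketch of the client side of solution 2 in a React class component. The /filters/:hash endpoint and the f query parameter are assumptions carried over from the hashing sketch earlier in this thread, and componentDidMount stands in for the (since-deprecated) componentWillMount:

```javascript
import React from 'react';

class FilteredGrid extends React.Component {
  state = { filters: null };

  async componentDidMount() {
    const hash = new URLSearchParams(window.location.search).get('f');
    if (!hash) return; // no shared filters: keep the default 7-day view

    // Ask the server to translate the hash back into the applied filters
    // (endpoint shape assumed, see the server sketch above).
    const filters = await fetch(`/filters/${hash}`).then(r => r.json());
    this.setState({ filters }); // re-render the grid with the shared filters
  }

  render() {
    return null; // grid + filter panel rendering omitted in this sketch
  }
}
```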
I am using the food2fork API to load search results onto a page. However, I run into a problem when I try to do pagination: I can only get 30 results at a time, and I don't know how to find out the total number of possible search results either. Does anyone know how I can achieve pagination for this, or if it's even possible?
I built this with Angular + Node, hosted on Heroku, if that makes a difference.
(Right now I've limited it so that users can browse up to three pages of their desired search, but that's hardcoded into the site, so it's problematic for searches that give more or fewer than exactly 3 pages' worth of results. I could have only 'prev' and 'next' buttons, but I feel that's also limiting.)
As said in the doc:
Pages (Search Only)
Any request will return a maximum of 30 results. To get the next set of results, send the same request again but with page = 2.
The default if omitted is page = 1.
If you want to fetch the results ranging from 31 to 60, you need to pass page=2 in the request. It looks like the API doesn't provide the total number of results.
I don't subscribe to @Arashsoft's proposal. It actually defeats the purpose of pagination, which is precisely not to load the full result set. What would the performance be if you had thousands of recipes?
But with this simple API, you could implement infinite scrolling, for instance.
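A sketch of that infinite scroll against the endpoint quoted above; the recipes response field is an assumption about the response shape, the 30-per-page cutoff follows the quoted docs, and everything else is illustrative:

```javascript
const API = 'https://www.food2fork.com/api/search';
let page = 1;
let loading = false;
let exhausted = false;

function renderRecipes(recipes) {
  // stub: append recipes to the page however your app renders them
  console.log(recipes);
}

async function loadNextPage(query) {
  if (loading || exhausted) return;
  loading = true;

  const res = await fetch(`${API}?key=YOUR_API_KEY&q=${encodeURIComponent(query)}&page=${page}`);
  const data = await res.json();
  const recipes = data.recipes || [];

  if (recipes.length < 30) exhausted = true; // a short page means we're done
  renderRecipes(recipes);
  page += 1;
  loading = false;
}

// Fire when the user scrolls near the bottom of the page.
window.addEventListener('scroll', () => {
  if (window.innerHeight + window.scrollY >= document.body.offsetHeight - 200) {
    loadNextPage('chicken');
  }
});
```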
If you cannot get more than 30 results from the API, I suggest calling the API in a loop until you get all the data (30, 60, 90, ...). Then you can paginate it easily for your end user.
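A sketch of that loop, under the same assumptions about the response shape as above (and mind the trade-off raised in the previous answer: this loads the entire result set up front):

```javascript
async function fetchAllRecipes(query) {
  const all = [];
  for (let page = 1; ; page++) {
    const res = await fetch(
      `https://www.food2fork.com/api/search?key=YOUR_API_KEY&q=${encodeURIComponent(query)}&page=${page}`
    );
    const { recipes = [] } = await res.json();
    all.push(...recipes);
    if (recipes.length < 30) break; // a short page means it's the last one
  }
  return all;
}
```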
I'm using JavaScript and PHP.
Instagram's API will only return 20 results at a time. I can get a user's feed OR tags from all users but not a combination of both.
For example:
If I do a search for all the posts tagged "selfie" but only want results from the users @ladygaga and @justinbieber, Instagram will only return 20 results from everybody with #selfie. Since the tag "selfie" is so widely used, it's extremely unlikely that any of those results will be from @ladygaga or @justinbieber.
How can I use pagination, or any other method, to filter search results by both tag and user?
We are implementing a Google Site Search for a client, and need access to all of the results for custom result output.
Currently only 10 results are returned at a time. Is there a way to retrieve more than 10, preferably the entire result set?
Are you scraping the website? If so, turn off Google Instant, because that option limits you to 10 search results. In your search settings you will be able to set the number of returned results. Obviously, you can't return the entire result set, though.