How to implement search in AngularJS - javascript

I have a frontend AngularJS application that connects to a backend API.
I'd like to implement a search engine in my SPA so that I can specify:
offset - the number of elements displayed on each page
page - number of current page
count - total elements
I also want to sort the results by a specified attribute and filter them as well.
What are the patterns for implementing such a search? What should be done on the backend and what on the frontend? What should the API look like?
I have a lot of issues with this: what if the user requested the 5th page with an offset of 10 and then changes the offset to 20; which page should be displayed then? When should additional data be fetched? What if I set the offset to 15 and then filter out some results, leaving me with only 5 elements; should I fetch the missing ones under the current filter?
Can you describe how such a mechanism should behave?
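One common pattern, sketched below with illustrative names rather than anything taken from the question, is to make pagination, sorting and filtering the backend's responsibility: the frontend sends the page number, page size, sort attribute and filter as query parameters; the backend returns the matching slice plus the total count; and the frontend resets to page 1 whenever the page size or the filter changes.

// Hypothetical request/response contract for server-side paginated search.
interface SearchRequest {
  page: number;       // 1-based page index
  pageSize: number;   // items per page (what the question calls "offset")
  sortBy?: string;    // attribute to sort on
  filter?: string;    // filter expression or free-text query
}

interface SearchResponse<T> {
  items: T[];         // the slice for the requested page
  totalCount: number; // total matching elements, so the UI can build the pager
}

// Fetch one page; when pageSize or the filter changes, call this again with page = 1
// and let the backend recompute totalCount under the new filter.
async function search<T>(apiUrl: string, req: SearchRequest): Promise<SearchResponse<T>> {
  const params = new URLSearchParams({ page: String(req.page), pageSize: String(req.pageSize) });
  if (req.sortBy) params.set("sortBy", req.sortBy);
  if (req.filter) params.set("filter", req.filter);
  const res = await fetch(`${apiUrl}?${params.toString()}`);
  return (await res.json()) as SearchResponse<T>;
}

Because totalCount comes back with every response, the frontend never has to guess how many pages exist under the current filter; it simply re-requests whenever page, pageSize, sortBy or filter changes.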

Related

How to paginate data with cloud firestore and javascript? [duplicate]

We use the ndb datastore in our current Python 2.7 standard environment. We are migrating this application to the Python 3.7 standard environment with Firestore (native mode).
We use pagination on ndb datastore and construct our query using fetch.
query_results, next_curs, more_flag = query_structure.fetch_page(10)
The next_curs and more_flag are very useful to indicate if there is more data to be fetched after the current query (to fetch 10 elements). We use this to flag the front end for "Next Page" / "Previous Page".
We can't find an equivalent of this in Firestore. Can someone help how to achieve this?
There is no direct equivalent in Firestore pagination. What you can do instead is fetch one more document than the N documents that the page requires, then use the presence of the N+1 document to determine if there is "more". You would omit the N+1 document from the displayed page, then start the next page at that N+1 document.
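A rough sketch of that approach with the Firebase v9 web SDK, since the question title mentions JavaScript (the collection name, ordering field and page size below are placeholders):

import { getFirestore, collection, query, orderBy, limit, startAfter, getDocs, QueryDocumentSnapshot } from "firebase/firestore";

const PAGE_SIZE = 10;
const db = getFirestore();

async function fetchPage(after?: QueryDocumentSnapshot) {
  const base = collection(db, "my-collection");
  const q = after
    ? query(base, orderBy("created_at"), startAfter(after), limit(PAGE_SIZE + 1))
    : query(base, orderBy("created_at"), limit(PAGE_SIZE + 1));
  const snap = await getDocs(q);
  const hasMore = snap.docs.length > PAGE_SIZE;                // the extra (N+1th) doc means there is a next page
  const pageDocs = snap.docs.slice(0, PAGE_SIZE);              // omit the N+1th doc from the displayed page
  const nextCursor = hasMore ? pageDocs[PAGE_SIZE - 1] : null; // start the next page after the last displayed doc
  return { pageDocs, hasMore, nextCursor };
}

Here hasMore plays the role of ndb's more_flag and nextCursor the role of next_curs from fetch_page.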
I built a custom Firestore API not long ago to fetch records with pagination. You can take a look at the repository. This is the story of the learning cycle I went through:
My first attempt was to use limit and offset. This seemed to work like a charm, but then I ran into the issue that it became very costly to fetch something like 200,000 records, because when using offset, Google also charges you for reads on all the records skipped before it. The Google Firestore Pricing Page clearly states this:
There are no additional costs for using cursors, page tokens, and limits. In fact, these features can help you save money by reading only the documents that you actually need.
However, when you send a query that includes an offset, you are charged a read for each skipped document. For example, if your query uses an offset of 10, and the query returns 1 document, you are charged for 11 reads. Because of this additional cost, you should use cursors instead of offsets whenever possible.
My second attempt was to use a cursor to minimize those reads. I ended up fetching N+1 documents and placing the cursor like so:
from google.cloud import firestore

db = firestore.Client()
collection = 'my-collection'
cursor = 'we3adoipjcjweoijfec93r04'  # document ID of the (N+1)th doc from the previous page

q = db.collection(collection)
snapshot = db.collection(collection).document(cursor).get()
q = q.start_at(snapshot)  # place the cursor at this document
docs = q.limit(11).stream()  # fetch N+1 (10 + 1) documents instead of the whole collection
Google wrote a whole page on pagination in Firestore. Some useful query methods when implementing pagination:
limit() limits the query to a fixed set of documents.
start_at() includes the cursor document.
start_after() starts right after the cursor document.
order_by() ensures all documents are ordered by a specific field.
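Combining those methods, the "Previous Page" direction can be handled by anchoring the query just before the first document of the current page. A rough sketch with the Firebase v9 web SDK (collection and field names are placeholders):

// "Previous Page": take the pageSize documents immediately preceding the current page.
import { getFirestore, collection, query, orderBy, endBefore, limitToLast, getDocs, QueryDocumentSnapshot } from "firebase/firestore";

async function fetchPreviousPage(firstDocOfCurrentPage: QueryDocumentSnapshot, pageSize = 10) {
  const db = getFirestore();
  const q = query(
    collection(db, "my-collection"),
    orderBy("created_at"),
    endBefore(firstDocOfCurrentPage), // stop right before the current page
    limitToLast(pageSize)             // keep only the last pageSize docs before that point
  );
  return (await getDocs(q)).docs;
}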

How to do pagination for API results when my results are limited?

I am using the food2fork API to load search results onto a page. However, I run into a problem when I try to do pagination. I can only get 30 results at a time, and I don't know how to find out the total number of possible search results either. Does anyone know how I can achieve pagination for this, or if it's even possible?
I built this with angular + node, hosted on heroku, if this makes a difference.
(Right now I've got it limited so that users can search up to three pages of their desired search, but it's hardcoded into the site so it's problematic for searches that give more or less than exactly 3 pages worth of results. I could have only 'prev' and 'next' buttons, but I feel that's also limiting.)
As said in the doc:
Pages (Search Only)
Any request will return a maximum of 30 results. To get the next set of results send the same request again but with page = 2
The default if omitted is page = 1
If you want to fetch the results ranging from 31 to 60, you need to pass page=2 in the request. It looks like the API doesn't provide the total number of results.
I don't subscribe to @Arashsoft's proposal. It actually defeats the purpose of pagination, which is not to load the full result set. What would the performance be if you had thousands of recipes?
But with this simple API, you could implement infinite scrolling, for instance.
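A sketch of that kind of infinite scrolling against a page-based API that returns at most 30 results per request (the URL, the "recipes" response field and the rendering hook below are assumptions; adapt them to the actual food2fork search request):

// Load the next page of 30 results whenever the user nears the bottom of the list.
let currentSearchQuery = "chicken"; // example query
let currentPage = 1;
let loading = false;
let exhausted = false;

function renderRecipes(recipes: unknown[]): void {
  // app-specific: append the recipe cards to the results list
}

async function loadNextPage(q: string): Promise<void> {
  if (loading || exhausted) return;
  loading = true;
  // Placeholder URL; the real request also needs the API key and search term.
  const res = await fetch(`/api/search?q=${encodeURIComponent(q)}&page=${currentPage}`);
  const data = await res.json();
  const recipes = data.recipes ?? [];      // assuming the response carries a "recipes" array
  if (recipes.length < 30) exhausted = true; // fewer than a full page: no more results
  renderRecipes(recipes);
  currentPage += 1;
  loading = false;
}

window.addEventListener("scroll", () => {
  if (window.innerHeight + window.scrollY >= document.body.offsetHeight - 200) {
    loadNextPage(currentSearchQuery);
  }
});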
If you cannot get more than 30 results from the API, I suggest calling the API in a loop until you get all the data (30, 60, 90, ...). Then you can paginate it easily for your end user.

Homepage Ajax/Google Maps - Server overhead

We are adding a Google Map plugin to our homepage which updates a single marker dynamically whenever there is a new product search on our site (which we read from our database). So: "has there been a new search via our site? If yes, reposition the marker based on the new search's coords".
Currently every "n" seconds (haven't settled on a seconds value yet) an Ajax call is made (using SetInterval) to determine if there has been a new search, and if there has it returns a small JSON response. The script run via the Ajax call is a PHP script, which queries the database for the last row in our searches table (order by desc limit 1).
So, my question is (not being a sysadmin): could this setup put an undesirable strain on our server? Should I incorporate a timeout, or something that turns off the Ajax call after 100 polls, or after 15 minutes (I mean, who sits for 15 minutes watching markers dynamically generated on a Google map?!).
Our homepage only receives roughly 200 visits a day.
Given that your website gets around 200 visits per day and your server just returns a small JSON response that you extract and display in the UI, a setup like this is normal practice. You could even poll the server via AJAX every 5 seconds to get more up-to-date data; it won't cause any performance issues at this level.
Just make sure you don't have geographically separated servers; otherwise you would need some other synchronization mechanism to track users' locations based on their searches.
For jQuery AJAX implementation details, please see the following page.
For a project implementation tutorial, please visit this tutorial.
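A sketch of the polling loop with a hard stop, so the call switches itself off after a fixed time on the page (the interval, cutoff and endpoint below are placeholders, not values from the question):

// Poll the backend for the latest search, and stop after 15 minutes on the page.
const POLL_INTERVAL_MS = 10_000;          // how often to check for a new search
const MAX_POLL_DURATION_MS = 15 * 60_000; // stop polling after 15 minutes

function updateMarker(lat: number, lng: number): void {
  // app-specific: e.g. marker.setPosition(new google.maps.LatLng(lat, lng));
}

const startedAt = Date.now();
const timer = setInterval(async () => {
  if (Date.now() - startedAt > MAX_POLL_DURATION_MS) {
    clearInterval(timer);                 // nobody watches the map that long
    return;
  }
  const res = await fetch("/latest-search.php"); // placeholder for the PHP endpoint
  const { lat, lng } = await res.json();
  updateMarker(lat, lng);
}, POLL_INTERVAL_MS);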

YouTube API v3 pageToken to generate a random video

I'm using the YouTube API to generate a page that loads YouTube videos. The stack that I'm using is HTML, CSS, and AngularJS. I want a button that will generate a random video given a search query. The way that I was planning to do this is to use the pageToken attribute.
I noticed that the token "CAEQAA" always returns the second page of search results of the query. And following that, "CAIQAA" gives the next page of search results after that. So this makes me think that these keys are independent of the search query.
However, this might be specific to my search options (one video per page of search results, safe search = strict, etc) even if it is independent of the search query. Is there a way to retrieve all the page tokens possible in a list or some form? This way, I can select a random token from this list to pick a random page of search results and thus a random video.
If I am misunderstanding how this works, please let me know as I am new to using this sort of API. Any help is appreciated. Thanks
I wrote an algorithm that can generate a pageToken for any given number in the range [0, 100000] (you can install it with npm install youtube-page-token).
With the package you can:
1) fetch the first page of results
2) get a total count
3) get a random number in that total range
4) generate a token for that number
5) plug back into YouTube API
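A sketch of that flow against the YouTube Data API v3 search endpoint (the generatePageToken import name is my assumption; check the package's README for the exact export, and YOUR_API_KEY is a placeholder):

// Pick a random search result by generating a pageToken for a random index.
import generatePageToken from "youtube-page-token"; // placeholder for the package's actual export

const API = "https://www.googleapis.com/youtube/v3/search";
const KEY = "YOUR_API_KEY";

async function randomVideo(q: string) {
  // 1) + 2) fetch the first page just to learn the total number of results
  const first = await fetch(`${API}?part=snippet&type=video&maxResults=1&q=${encodeURIComponent(q)}&key=${KEY}`);
  const firstJson = await first.json();
  const total = Math.min(firstJson.pageInfo.totalResults, 100000); // the package covers [0, 100000]

  // 3) + 4) pick a random index in that range and turn it into a pageToken
  const index = Math.floor(Math.random() * total);
  const pageToken = generatePageToken(index);

  // 5) plug the token back into the same search request
  const res = await fetch(`${API}?part=snippet&type=video&maxResults=1&q=${encodeURIComponent(q)}&pageToken=${pageToken}&key=${KEY}`);
  const json = await res.json();
  return json.items[0];
}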

Google Site Search - 10 Result Limitation

We are implementing a Google Site Search for a client, and need access to all of the results for custom result output.
Currently only 10 results are returned at a time. Is there a way to retrieve more than 10, preferably the entire result set?
Are you scraping the web site? If so, turn off Google Instant because that option limits you to 10 search results. In your search settings, you will be able to set the number of returned results. Obviously, you can't return the entire result set, though.
