How to improve performance of jQuery autocomplete - javascript

I was planning to use jQuery autocomplete for a site and have implemented a test version. I'm now using an AJAX call to retrieve a new list of strings for every character input. The problem is that it gets rather slow: it takes 1.5 s before the new list is populated. What is the best way to make autocomplete fast? I'm using CakePHP and just doing a find with a limit of 10 items.

This article about how Flickr does autocomplete is a very good read. I had a few "wow" experiences reading it.
"This widget downloads a list of all of your contacts, in JavaScript, in under 200ms (this is true even for members with 10,000+ contacts). In order to get this level of performance, we had to completely rethink how we send data from the server to the client."

Try preloading your list object instead of doing the query on the fly.
Also, the autocomplete widget has a 300 ms delay by default; perhaps remove the delay:
$( ".selector" ).autocomplete({ delay: 0 });
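For example, a minimal sketch of the preloading idea, assuming a hypothetical /keywords.json endpoint that returns a plain JSON array of strings (".selector" is a placeholder element):

// Sketch: load the full term list once, then let the widget filter it locally.
$.getJSON("/keywords.json", function (terms) {
    $(".selector").autocomplete({
        delay: 0,        // no artificial wait before filtering
        minLength: 2,    // skip one-character lookups
        source: terms    // jQuery UI filters this local array itself
    });
});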

A 1.5-second wait is far too long for an autocomplete service.
First, optimize your query and DB connections. Try keeping your DB connection alive with memory caching.
Use result caching if your service is heavily used, to avoid re-fetching the same results.
Use a client cache (a JS list) to keep earlier requests on the client. If the user types and then erases characters, it is going to be useful: results will come from the front-end cache instead of the backend (see the sketch below).
Regex filtering on the client side won't be costly; you may give it a chance.
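A minimal sketch of the client-cache idea, assuming a hypothetical /autocomplete.php endpoint that returns a JSON array of suggestions for a given term:

// Sketch: front-end cache keyed by the typed term (endpoint name is a placeholder).
var cache = {};
$(".selector").autocomplete({
    minLength: 2,
    source: function (request, response) {
        var term = request.term;
        if (term in cache) {
            response(cache[term]);   // repeated or erased-and-retyped terms are served locally
            return;
        }
        $.getJSON("/autocomplete.php", { q: term }, function (data) {
            cache[term] = data;      // remember the result for next time
            response(data);
        });
    }
});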

Before doing any optimization you should first analyze where the bottleneck is. Try to find out how long each step (input → request → db query → response → display) takes. Perhaps the CakePHP implementation has a delay so that it does not send a request for every character entered.

Server-side PHP/SQL is slow.
Don't use PHP/SQL. My autocomplete is written in C++ and uses hash tables for lookups. See performance here.
That is on a Celeron 300 machine running FreeBSD with Apache/FastCGI.
As you can see, it runs quickly on huge dictionaries; 10,000,000 records is not a problem.
It also supports priorities, dynamic translations, and other features.
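The hash-table idea translates roughly like this in JavaScript (a toy sketch for illustration only, not the C++ implementation referenced above): build a map from every prefix to its matches once, so each keystroke costs a single key lookup.

// Toy prefix-index sketch: trade memory for constant-time lookups per keystroke.
function buildPrefixIndex(words, maxResults) {
    var index = {};
    words.forEach(function (word) {
        for (var i = 1; i <= word.length; i++) {
            var prefix = word.slice(0, i);
            var bucket = index[prefix] || (index[prefix] = []);
            if (bucket.length < maxResults) {
                bucket.push(word);   // keep only the first few matches per prefix
            }
        }
    });
    return index;
}

var index = buildPrefixIndex(["apple", "apricot", "banana"], 10);
console.log(index["ap"]);   // ["apple", "apricot"]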

The real issue for speed in this case, I believe, is the time it takes to run the query on the database. If there is no way to improve the speed of the query, you could broaden the search to return more items (20-30, including some highly ranked results), run one search every other character, and filter through those results on the client side.
This may improve the appearance of performance, but at 1.5 seconds, I would first try to improve the query speed.
Other than that, if you can give us some more information I may be able to give you a more specific answer.
Good luck!

Autocomplete itself is not slow, although your implementation certainly could be. The first thing I would check is the value of your delay option (see jQuery docs). Next, I would check your query: you might only be bringing back 10 records but are you doing a huge table scan to get those 10 records? Are you bringing back a ton of records from the database into a collection and then taking 10 items from the collection instead of doing server-side paging on the database? A simple index might help, but you are going to have to do some testing to be sure.

Related

How to ensure data consistency in REST API pagination

I'm building an instant messenger on a mobile client that interacts with a RESTful API through HTTP requests. The pagination endpoint is quite standard - it has a starting location (offset) and a number of items per page (limit). I'm having trouble figuring out how to ensure 100% data consistency with pagination when the database can change rapidly.
For example, with a dozen participants there could be a dozen new messages in a conversation within a second. I don't think it's far-fetched to assume that some of those messages will alter the database in the time it takes the pagination request to come back from the server. Fortunately, since this is a messenger, I do not have to consider data deletion, only data addition.
In my research, the following two links were quite helpful but didn't provide a clear solution:
How to ensure data integrity in paginated REST API?
How to implement robust pagination with a RESTful API when the resultset can change?
The only potential solution I can come up with is using the timestamp of the last object in the previously fetched page. So the HTTP query would have timestamp as a parameter, and the server would return a page of objects created after that timestamp.
Is there any potential problem I'm not seeing, or even better, a much better solution to this issue?
It seems that the method I've thought of has a name - cursor based pagination.
The link below has a great graphical description and explanation, plus an example in php.
http://www.sitepoint.com/paginating-real-time-data-cursor-based-pagination/
There's also a helpful guide from Django REST Framework that compares two different pagination techniques (LimitOffsetPagination and CursorPagination).
http://www.django-rest-framework.org/api-guide/pagination/
Cursor-based pagination requires a unique, unchanging ordering of items. Facebook and Twitter use generated IDs for this. As for me, I've decided to simply use the timestamp at object creation, which offers millisecond precision. That should be good enough for now.
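A minimal sketch of that cursor idea, assuming each message carries a createdAt timestamp in milliseconds (illustrative only, not tied to any particular framework):

// Sketch: return the next page of messages created after the cursor timestamp.
function nextPage(messages, cursor, limit) {
    return messages
        .filter(function (m) { return m.createdAt > cursor; })          // only newer items
        .sort(function (a, b) { return a.createdAt - b.createdAt; })    // stable ordering by creation time
        .slice(0, limit);
}

var messages = [
    { id: 1, createdAt: 1000, text: "hi" },
    { id: 2, createdAt: 2000, text: "hello" },
    { id: 3, createdAt: 3000, text: "hey" }
];
var page = nextPage(messages, 1000, 2);                                  // -> messages 2 and 3
var newCursor = page.length ? page[page.length - 1].createdAt : 1000;    // cursor for the next request

One caveat: if two messages share the same createdAt value, one of them can be skipped or duplicated at a page boundary, which is presumably why Facebook and Twitter fall back on generated IDs.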

"fragmenting" HTTP requests

I have an Angular app pulling data from a REST server. Each item we pull has some "core" data - what's needed to display its basic representation - and then what I call "secondary" data: comments and other things that the user might want to see and might not.
I'm trying to optimize our request pattern to minimize the overall amount of time the user spends looking at a loading spinner. Pulling all the data (core and secondary) at once makes the initial request return far too slowly, but pulling only the bare essentials until the user asks for something we haven't requested yet also creates unnecessary load time, at least insofar as I could have anticipated what they would want to see and loaded it while they were busy reading the core content.
So, right now I'm doing a "core content" pull first and then initiating a "secondary" pull at the end of the success callback from the first. This is going to be an experimental process, but I'm wondering what (if any) best practices have been established in this situation. (I'm sure a good answer to that is a google away, but in this instance I'm not quite sure what to google - thus the quotation marks in this question's title)
A more concrete question: Am I better off initiating many small HTTP transactions or a few large ones? My instinct is to do many small ones, particularly if I can anticipate a few things the user is likeliest to want to see first and get those loaded as soon as possible. But surely there's an asymptote here? Or am I off-base in this line of thinking entirely?
I use the same approach as you, and it works pretty well for a many-keyed collection of 10,000+ items.
The collection is paginated with ui.bootstrap.pagination, only a maximum of 10 items are displayed at once. It can be searched on title.
So my approach is to retrieve only id and title, for the whole collection, so the search can be used straight away.
Then, as the items displayed on screen are in an array, I place a $watch on that array. The job of the $watch is to go fetch full details of the items in the array (secondary pull), but of course only when the array is changed.
So, in the worst case scenario, you are pulling the full details of only 10 items.
Results are cached for more efficiency. It displays instant results, as the $watch acts as a pre-loader.
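A rough sketch of that approach in AngularJS 1.x (the module name, $scope.displayed, and the /api/items/:id endpoint are made-up placeholders):

// Sketch: watch the currently displayed page and pre-fetch full details for those items only.
angular.module('app').controller('ListCtrl', function ($scope, $http) {
    $scope.displayed = [];   // the ~10 items currently on screen (id + title only)
    var detailCache = {};    // id -> full item details

    $scope.$watchCollection('displayed', function (items) {
        items.forEach(function (item) {
            if (detailCache[item.id]) { return; }              // already fetched, skip
            $http.get('/api/items/' + item.id).then(function (res) {
                detailCache[item.id] = res.data;               // cache the secondary pull
                angular.extend(item, res.data);                // merge details into the row
            });
        });
    });
});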
Am I better off initiating many small HTTP transactions or a few large ones?
I believe large transactions covering just a few items (the ones which are clickable on the screen) are very efficient.
Regarding the best practice bit: I suppose there are many ways to achieve your goals; however, the technique you are using works extremely well, as it retrieves only what is needed, and only just before it is needed.
Besides, it is simple enough to implement.
Also, like you I would have thought many smaller pulls were surely better than several large ones. However, I was advised to go for a large pull as a comment to this question: Fetching subdocuments with angular $http
To answer your question about which keywords to search for, I suggest:
progressive loading
An alternative could be using websockets and streaming loading: Oboe.js does this quite well:
http://oboejs.com/examples

Using Jquery AutoComplete with dictionary list

I have a dictionary list of about 58040 words and I don't think jQuery autocomplete can handle that many words, as the browser hangs.
The list is
words = ['axxx','bxxx','cxxx', and so on];
$(".CreateAddKeyWords input").autocomplete({ source: words });
Am I doing something wrong?
Is there another free tool that I can use?
Edit
I am using .NET and I have retrieved the data from the database and can loop through it server-side, but how do I send the data back? If it should be JSON, what should the format look like?
Is there another free tool that I can use?
Yes - instead of hardcoding 58040 words in your HTML or JavaScript file, you could load them from a remote data source using AJAX. Basically, you will have a server-side script which, when queried with the current user input, will prefilter the results and send them to the client to display as suggestions.
You should assign a minimum length of user entry before searching (so it isn't querying with 1 or 2 characters).
$(".CreateAddKeyWords input").autocomplete({ source: words, minLength: 3 });
It's possible the browser is hanging because it is trying to search on the very first character, which is not very useful. ~58k entries is not a large dataset by most standards, especially once you require 2-3 characters before searching.
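For example, a sketch of the remote-source setup, assuming a hypothetical /Words/Search endpoint that reads the term query-string parameter jQuery UI sends and returns a JSON array:

// Sketch: let the server do the filtering instead of shipping 58k words to the browser.
// When source is a URL string, jQuery UI requests it with ?term=<typed text> and
// expects a JSON array back.
$(".CreateAddKeyWords input").autocomplete({
    source: "/Words/Search",   // placeholder endpoint
    minLength: 3,              // don't query on 1-2 characters
    delay: 250                 // coalesce fast typing into fewer requests
});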
That's just way too much data to load into your webpage. Require at least 2 letters before querying.
1) set the autocomplete min length to at least 2
2) Create a webpage that returns JSON data - http://mydomain.com/words.php?q={letters}
You can have the filter sort be 'begins with' before 'contains'; or any variation you prefer.
Use that page as your remote data source. With the min length set, autocomplete knows when to query for new data.
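Regarding the JSON format asked about in the question: jQuery UI autocomplete accepts either a plain array of strings or an array of label/value objects. A sketch of both shapes, plus a hypothetical helper that ranks 'begins with' matches ahead of 'contains' matches:

// Either response shape works for the widget:
//   ["apple", "apricot", "grape"]
//   [{ "label": "Apple", "value": "apple" }, { "label": "Grape", "value": "grape" }]

// Sketch: rank 'begins with' matches ahead of 'contains' matches.
function rankMatches(words, term) {
    term = term.toLowerCase();
    var starts = [], contains = [];
    words.forEach(function (w) {
        var pos = w.toLowerCase().indexOf(term);
        if (pos === 0) { starts.push(w); }
        else if (pos > 0) { contains.push(w); }
    });
    return starts.concat(contains);
}

console.log(rankMatches(["grape", "apple", "pineapple"], "ap"));
// -> ["apple", "grape", "pineapple"]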
I thought this was an interesting problem, and hacked up a backend service that solves auto-completion.
My code is at https://github.com/badgerman/fastcgi/ (look for complete.c), and the quick and dirty javascript proof of concept from that repository is currently at http://enno.kn-bremen.de/prefix.html (no guarantees that it will stay up for very long, since this is running on the Raspberry Pi in my home).

Best Practices for displaying large lists

Are there any best practices for returning large lists of orders to users?
Let me try to outline the problem we are trying to solve. We have a list of customers, each with 1-5,000+ orders associated with them. We pull these orders directly from the database and present them to the user in a paginated grid. The view we have is a very simple "select columns from orders", which worked fine when we were first starting, but as we grow it's causing performance/contention problems. It seems like there are a million and one ways to skin this cat (return only a page's worth of data, only return the last 6 months of data, etc.), but as I said before, I'm just wondering if there are any resources out there that provide a little more hand-holding on how to solve this problem.
We use SQL Server as our transaction database and select the data out in XML format. We then use a mixture of XSLT and Javascript to create our grid. We aren't married to the presentation solution but are married to the database solution.
My experience.
Always set default values in the UI for the user that are reasonable. You don't want them clicking "Retrieve" and getting everything.
Set a limit to the number of records that can be returned.
Only return from the database the records you are going to display.
If forward/backward consistency is important, store the entire result set from the query in a temp table and return just the page you need to display. When paging up/down, retrieve the next set from the temp table.
Make sure your indexes are covering your queries.
Use different queries for different purposes. Think "Open Orders" vs "Closed Orders". These might perform much better as separate queries instead of one generic query.
Set parameter defaults in the stored procedures. Protect your query from a UI that is not setting reasonable limits.
I wish we did all these things.
I'd recommend doing some profiling to find the actual bottlenecks. Perhaps you have access to Visual Studio Profiler? http://msdn.microsoft.com/en-us/magazine/cc337887.aspx There are plenty of good profilers out there.
Otherwise, my first stop would be pagination, to bring back fewer records from the db, which is easier on the connection and the memory footprint. Take a look at this (I'm assuming you're on SQL Server >= 2005):
http://www.15seconds.com/issue/070628.htm
I'm not sure from the question exactly what UI problem you are trying to solve.
If it's that the customer can't work with a table that is just one big amorphous blob, then let him sort on the fields: order date, order number, your SKU number, maybe his SKU number, and I guess others, too. He might find it handy to do a multi-column stable sort as well.
If it's that the table headers scroll up and disappear when he scrolls down through his orders, that's more difficult. Read the SO discussion to see if the method there gives a solution you can use.
There is also a JQuery mechanism for keeping the header within the viewport.
HTH
EDIT: plus I'll second #Iain's answer: do some profiling.
Another EDIT: #Scott Bruns's answer reminded me that when we started designing the UI, the biggest issue by far was limiting the number of records the user had to look at. So yes, I agree with Scott that you should give the user some way to see only a limited number of records right from the start; that is, before he ever sees a table, he has already told you a lot about what he wants to see.
Stupid question, but have you asked the users of your application for input on what records that they would like to see initially?

Dynamic filtering, am I doing it wrong?

So I have an Umbraco site with a number of content-managed products in it, and I need to search/filter this dataset on the front end based on 5 criteria.
I'd estimate I will have 300 products. I need to filter this data very fast and hide show options that are no longer relevant based on the previous selections.
I'm currently building a webservice and jquery implementation using AJAX.
Is the best way to do this to load it into a JavaScript data structure and operate on it there, or will AJAX calls be fast enough? Obviously the client-side route will mean duplicating the functionality on the server side for non-JavaScript users.
If you need to filter the data "very fast" then I imagine the best way is to preload all the data then manipulate it client side. If you're waiting for an Ajax response every time the user needs to filter the data then it's not going to be as fast as filtering it on the client (assuming they haven't got an ancient computer running IE6).
It would depend on the complexity of your filtering. If all you're doing is showing results where, for example, the product's price is greater than $10, then client-side filtering will definitely be much faster. If you're going to be doing complex searches then it's possible that it could be faster to process server-side. The other question is how much data is saved for each product - preloading a few hundred products with a lot of data may take some time.
As always, the only way you'll truly be able to answer this question is by profiling the two solutions.
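If you do preload the data, the client-side filter over ~300 products is only a few lines. A sketch with made-up criteria names (colour, size):

// Sketch: filter a preloaded product array on whichever criteria are currently set.
function filterProducts(products, criteria) {
    return products.filter(function (p) {
        return Object.keys(criteria).every(function (key) {
            return !criteria[key] || p[key] === criteria[key];   // ignore unset filters
        });
    });
}

var products = [
    { name: "Chair", colour: "red",  size: "large" },
    { name: "Table", colour: "blue", size: "large" }
];
console.log(filterProducts(products, { colour: "red", size: "" }));
// -> [{ name: "Chair", colour: "red", size: "large" }]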
