JavaScript autocomplete upon typing

Right guys, all the autocomplete plugins and functions I've found only update on keyup/keydown, etc. This is fine, but the search only starts once the user has stopped typing; while they are still typing a word or phrase, the script can't start suggesting anything.
I know this will be a simple fix or suggestion for some of you, so any help would be greatly appreciated on how I can make it fire instantly as each key is pressed.
An example of the desired effect is Google Suggest or Facebook search, where a search fires instantly on every keypress or change. How do I emulate this?
Thanks!

Is this what you mean? Or do you want Ajax to retrieve from a database?
var data = "Core Selectors Attributes Traversing Manipulation CSS Events Effects Ajax Utilities".split(" ");
$("#example").autocomplete(data);
(This uses the jQuery autocomplete plugin.)
Edit: I'm not sure I know what you mean, because this example seems to work identically to Google Suggest or Facebook. If your database is small you could download the cache into the variable data on page load. If your database is slightly larger you could LIMIT the cache to only X responses for each alphabetical character or series of characters (i.e. WHERE city LIKE 'aa%' LIMIT 10, and so on for each prefix).
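If you do need the database route, here's a minimal sketch of that preload idea, assuming a jQuery-UI-style autocomplete and a hypothetical /autocomplete-cache endpoint that returns the LIMITed rows as a JSON array:
$.getJSON("/autocomplete-cache", function (data) {
    // data is the preloaded list; matching now happens locally on every keystroke
    $("#example").autocomplete({ source: data, delay: 0 });
});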

It depends on how big the space you're searching is and how good your servers are. Facebook search (of people's names, I assume) is quick because you're only really searching through a thousand or so contacts. Google is fast because they invest a lot of money in infrastructure and cache a lot of the responses.
On one of my projects I've used this jQuery plugin and it provides excellent performance on cached results. We used it to provide autocomplete functionality on a list of about 6K contacts (names, etc.). Is this what you had in mind?

The Wicket web framework has the concept of a "throttling" behavior. Normally, AJAX requests in Wicket applications are queued against an "ajax channel", which triggers a request instantly if none is running. If a request is already running, the next request is queued, and triggered when the current one returns.
"Throttling" lets the behavior delay itself for a certain amount of time (say, two seconds). If the behavior fires again in the same period, the callback for the most recent behavior replaces the callback for the current queued behavior. (For example, the user starts typing "albuquerque", which triggers the events "A" then "AL", then "ALB". The system might trigger "A", then "ALB", skipping over "AL" because it was replaced by "ALB" while sitting in the queue.) The object of this is to fire a behavior instantly on each keypress, but prevent the server from being flooded with unnecessary requests.
Check out the wicket ajax source code:
https://github.com/apache/wicket/blob/wicket-1.4.8/wicket/src/main/java/org/apache/wicket/ajax/wicket-ajax.js
For more about the web framework, see:
http://wicket.apache.org

Related

Progressbar in SQL

So I learned that it's not exactly an easy task to create a progress bar for SQL queries, and I've tried to come up with an alternative way to indicate when a task will be done. I figured I could time how long it takes to execute one query in the database, then use that to build an estimate based on how many queries have to be performed, and create a progress bar from that. I know it won't be 100% reliable, because if something goes wrong it would still show the progress bar, but it would at least give the user some indication of when the job will be done, rather than just a loading spinner.
Is there a better solution to this problem?
edit to answer some questions
It's the update function that takes time. I have maybe 15,000 entries in the database and I have to update all 15,000 through an API. So first I have to pull out the ids of all of them, use them with an API to get the updated information, and then execute all 15,000 queries with the updated information. All this takes time; I doubt it can be done quickly. With a database of 15,000 rows it took me about 2 hours. I don't believe a spinner for 2 hours without any indication of when it will be done is reasonable. I need some sort of estimate.
Note that updating the database is something that will be done very rarely. Like maybe once a week the user will update the database.
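As a rough illustration of the estimate the question describes (updateOne, ids, and renderEta are hypothetical stand-ins for the app's own API call and UI):
var start = Date.now();
updateOne(ids[0], function () {            // run a single API update as a sample
    var perQuery = Date.now() - start;
    var totalMs = perQuery * ids.length;   // crude extrapolation across all ~15,000
    renderEta(totalMs);                    // e.g. "done in about 2 hours"
});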
You can use reverse Ajax and then return results into an iframe.
On the server side, use Response.Flush (ASP.NET) to push a partial result to the client when one iteration step is completed.
Output JavaScript code like:
window.parent.fnReportProgress(25.3)
Also, make sure you don't update the DOM on every progress message. Instead, store the messages in an array and poll that array every x milliseconds (setInterval) until complete. If you update the progress bar immediately instead of buffering the messages in an array, you will get a bluescreen in IE, and only in IE; Chrome and Firefox work perfectly fine.
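A sketch of that buffering (it assumes jQuery and a #progressBar element; fnReportProgress is the function the flushed script blocks call):
var progressQueue = [];
function fnReportProgress(pct) {
    progressQueue.push(pct);                              // never touch the DOM here
}
var progressTimer = setInterval(function () {
    if (progressQueue.length === 0) return;
    var latest = progressQueue[progressQueue.length - 1]; // only the newest value matters
    progressQueue.length = 0;
    $("#progressBar").css("width", latest + "%");
    if (latest >= 100) clearInterval(progressTimer);
}, 250);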
If you only need to support halfway modern browsers (IE10+) on halfway modern servers (IIS 8+), you can also use WebSockets.
WebSockets will also work on servers older than IIS 8, but then you'll have to develop the WebSocket server app separately, on a separate port.
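For the WebSocket route, a minimal client-side sketch (the ws:// endpoint is hypothetical; the server is assumed to push percentage values as messages):
var socket = new WebSocket("ws://example.com/progress");
socket.onmessage = function (evt) {
    document.getElementById("progressBar").style.width = evt.data + "%";
};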

How can I show recent searches done through a textbox using JavaScript/jQuery?

What I want to achieve is that when I focus on the search bar, it should show me a list of recent searches done during this session, and I should be able to select one of them and have it appear in the textbox.
Any help appreciated. If possible I would like to store this recent-search data in the browser so that whenever I return to this website it shows me the list.
Thanks in advance.
Assuming you will be using web technologies like HTML and JavaScript, a good start would be to store each search in an array.
Using JavaScript along with the jQuery library, you can easily add items to an array each time a user clicks a button.
JavaScript:
var myArray = [];
myArray.push($('#yourTextBox').val());
Then you could use jQuery's $.each function to display each item in a DOM element.
See the sample below (HTML and JavaScript with jQuery 1.11):
http://jsfiddle.net/1ncf0b6f/3/
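In the same spirit as that fiddle, a minimal sketch (the element ids are hypothetical):
var recentSearches = [];
$("#searchButton").on("click", function () {
    var term = $("#yourTextBox").val();
    if (term) recentSearches.push(term);
    var $list = $("#recentList").empty();           // re-render the whole list
    $.each(recentSearches, function (i, item) {
        $("<li>").text(item).appendTo($list);
    });
});
$("#recentList").on("click", "li", function () {    // clicking an item restores it
    $("#yourTextBox").val($(this).text());
});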
TL;DR: I've written a library to handle this plus its edge cases; see https://github.com/JonasBa/recent-searches#readme for usage.
You should store the recent searches in localStorage and retrieve them, then decide how you want to render them. This obviously has some edge cases that you need to consider, which is why I wrote a library to do exactly this; see the examples below.
Examples
Expiration:
Consider that someone searches for the query iPhone, but looked for the query repairing iPhone a month ago; that repairing iPhone query is likely obsolete by now.
Ranking of recent searches
The same goes for ranking when doing prefix search: if a user made the query "apple television" 3h ago and the query "television cables" 8h ago, and they now search for "television", you probably want a ranking system to order the two.
Safely handling storage
Just writing everything to localStorage will result in a massive JSON blob that you'll need to parse every time, gradually slowing down your application until you hit the storage limit and lose this functionality.
I've built a recent-searches library which helps you tackle all that. You can use it via npm and find it here. It will help you with all of the above issues and allow you to build recent-searches really quickly!
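If you'd rather hand-roll it, here is a bare-bones sketch of the storage side (this is not the library's API; the key name and the week-long TTL are arbitrary choices):
var TTL = 7 * 24 * 60 * 60 * 1000; // treat entries older than a week as stale

function saveSearch(term) {
    var all = JSON.parse(localStorage.getItem("recentSearches") || "[]");
    all.unshift({ term: term, ts: Date.now() });
    // cap the list so the stored JSON stays small and cheap to parse
    localStorage.setItem("recentSearches", JSON.stringify(all.slice(0, 50)));
}

function loadSearches() {
    var now = Date.now();
    return JSON.parse(localStorage.getItem("recentSearches") || "[]")
        .filter(function (e) { return now - e.ts < TTL; }); // drop expired entries
}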

Detect the referrer(s) of URL(s) using JavaScript or PHP from inside a bookmarklet

Let's think out of the box!
Without any programming skills, how can you tell whether you are on a web page that lists products, rather than on the page that shows the details of a single product?
The bookmarklet is inserted using JavaScript right after the body tag of a website (eBay, Bloomingdale's, Macy's, Toys"R"Us ...)
Now, my story is: (programming skills needed now)
I have a bookmarklet, and my main problem is how to detect whether I am on a page that lists products or on the page that shows the product detail.
The best way I could think of to detect whether I am on the detail page of a product is to detect the referrer(s) of the current URL (maybe all the referrers, the entire click history).
Possible problem: a user adds the URL as a favorite, does not use my bookmarklet, and closes the browser; then the user opens the browser again, clicks the favorite link and uses my bookmarklet. I think I can't detect the referrer in this case; that's OK, not all cases are covered or possible.
Can I detect the referrer of this link using the cache in this case? (Many browser cache systems are involved here, I know.)
how can you say/detect if you are on a web page that lists products, and not on the page that prints specific details of a product
I'd set up Brain.js (a neural net implemented in JavaScript) and train it on a (necessarily broad and varied) sample set of DOMs, then pick a threshold product:details ratio to 'detect' (as near as possible) what type of page I'm on.
This will require some trial and error, but it's the best approach I can think of (neural nets can get to "good enough" results pretty quickly; try it, you'll be surprised at the results).
No. You can't check history with a bookmarklet, or with any normal client side JavaScript. You are correct, the referrer will be empty if loaded from a bookmark.
The bookmarklet can however store the referrer the first time it is used in a cookie or in localStorage and then the next time it is used, if referrer is empty, check the cookie or localStorage.
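A rough sketch of that fallback, run inside the bookmarklet (the storage key is hypothetical):
var ref = document.referrer;
if (ref) {
    localStorage.setItem("lastReferrer", ref);          // remember it for next time
} else {
    ref = localStorage.getItem("lastReferrer") || "";   // loaded from a favorite; fall back
}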
That said, your entire approach to this problem seems really odd to me, but I don't have enough details to know if it is genius or insanity.
If I was trying to determine if the current page was a list or a details page, I'd either inspect the url for common patterns or inspect the content of the page for common patterns.
Example of common URL patterns: many 'list pages' are search results, so the query string will have words like "search=", "q=", "keywords=", etc.
Example of page content patterns: a product page will have only one "buy" or "add to cart" button, whatever the site calls it. A list page will have either no such button or many.
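Putting both heuristics into a sketch (the selectors here are guesses and would need per-site tuning):
function looksLikeListPage() {
    // list pages are often search results
    var searchy = /[?&](search|q|keywords)=/i.test(window.location.search);
    // detail pages usually have exactly one buy/add-to-cart control
    var buyButtons = document.querySelectorAll(
        "button.add-to-cart, input[value='Add to Cart']").length;
    return searchy || buyButtons !== 1;
}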
Why don't you use the URL? Then you can do something like http://www.le.url.com?pageid=10&type=DS and the code will be something like this:
<?php
if (isset($_GET['type']) && $_GET['type'] == 'DS') {
    // Do stuff related to Details Show
} else {
    // Show all the products
}
?>
And you can make the URL something like this with an .htaccess file:
http://www.le.url.com/10/DS
I would say your goal should first be for it to work for some websites. Then many websites and then eventually all websites.
A) Try hand coding the main sites like Amazon, eBay etc... Have a target in mind.
B) Something more creative might be to keep a list of all currency symbols, then detect whether a page has, say, 10 scattered around. For instance, the $ symbol is found all over Amazon, but only when there are, say, 20 per page can you really say it's a product listing (this is a bad example; Amazon's pages are fairly crazy). Perhaps the currency symbols won't work; however, I think you can generalize something similar, perhaps lots of currency symbols plus detection of a "grid"-type layout with things lined up in rows. You'll get lots of garbage, so you'll need good filtering. Data analysis is needed once you have something working algorithmically like this.
C) I think after B) you'll realize that your system might be better with parts of A). In other words you are going to want to customize the hell out of certain popular websites (or more niche ones for that matter). This should help fill the gap for sites that don't follow any known models.
Now, as far as tracking where the user came from, why not use a tracking-cookie type concept? You could of course use IndexedDB or localStorage or whatever. In other words, always keep a reference to the last page by saving it on the current page. You could also keep a stack and push URLs onto it on every page; if you want to save it for some reason, just send that data back to your server.
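A sketch of that URL stack, run on every page the bookmarklet sees (key name and depth are arbitrary):
var trail = JSON.parse(localStorage.getItem("pageTrail") || "[]");
trail.push(location.href);
localStorage.setItem("pageTrail", JSON.stringify(trail.slice(-20))); // keep the last 20
var previousPage = trail.length > 1 ? trail[trail.length - 2] : null; // survives favorite clicks
// note: localStorage is per-origin, so this trail only covers pages on the same site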
Detecting favorite clicks could involve detecting all AJAX traffic and analyzing it (although this might be hard...). You should first do a survey to see what those calls typically look like; I'd imagine something like amazon.com/favorite/product_id would be fairly common. Also, you could try to detect the selector for the "favorite" button on the page and then add an onclick handler to detect when it is clicked.
I tried to address each of the problems you mentioned, though I'm not sure I understand exactly what you are trying to do.

How to improve performance of Jquery autocomplete

I was planning to use jQuery autocomplete for a site and have implemented a test version. I'm now using an Ajax call to retrieve a new list of strings for every character input. The problem is that it gets rather slow: 1.5s before the new list is populated. What is the best way to make autocomplete fast? I'm using CakePHP and just doing a find with a limit of 10 items.
This article about how Flickr does autocomplete is a very good read. I had a few "wow" experiences reading it.
"This widget downloads a list of all
of your contacts, in JavaScript, in
under 200ms (this is true even for
members with 10,000+ contacts). In
order to get this level of
performance, we had to completely
rethink how we send data from the
server to the client."
Try preloading your list object instead of doing the query on the fly.
Also, the autocomplete has a 300 ms delay by default; perhaps remove the delay:
$( ".selector" ).autocomplete({ delay: 0 });
1.5 seconds is a very wide gap for an autocomplete service.
First, optimize your query and DB connections; try keeping your DB connection alive with memory caching.
Use result caching if your service is heavily used, to avoid re-fetches.
Use a client cache (a JS list) to keep old requests on the client. If the user types and then erases, results can come from the frontend cache instead of the backend (see the sketch below).
Regex filtering on the client side won't be costly; you may give it a chance.
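A sketch of that frontend cache, assuming a jQuery-UI-style autocomplete and a hypothetical /autocomplete endpoint:
var prefixCache = {};
$("#search").autocomplete({
    delay: 0,
    source: function (request, response) {
        if (prefixCache[request.term]) {        // hit: no round-trip to the backend
            response(prefixCache[request.term]);
            return;
        }
        $.getJSON("/autocomplete", { q: request.term }, function (data) {
            prefixCache[request.term] = data;   // remember for backspacing/retyping
            response(data);
        });
    }
});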
Before doing any optimizations you should first analyze where the bottleneck is. Try to find out how long each step (input → request → db query → response → display) takes. Maybe the CakePHP implementation adds a delay so as not to send a request for every character entered.
Server-side PHP/SQL is slow.
Don't use PHP/SQL. My autocomplete is written in C++ and uses hashtables for lookup. See the performance here.
That's a Celeron-300 computer, FreeBSD, Apache/FastCGI.
And, as you can see, it runs quickly on huge dictionaries; 10,000,000 records isn't a problem.
It also supports priorities, dynamic translations, and other features.
The real issue for speed in this case, I believe, is the time it takes to run the query on the database. If there is no way to improve the speed of your query, then perhaps extending your search to include more items, with some highly ranked results among them, would let you perform one search every other character and filter through the 20-30 results on the client side.
This may improve the appearance of performance, but at 1.5 seconds I would first try to improve the query speed.
Other than that, if you can give us some more information I may be able to give you a more specific answer.
Good luck!
Autocomplete itself is not slow, although your implementation certainly could be. The first thing I would check is the value of your delay option (see jQuery docs). Next, I would check your query: you might only be bringing back 10 records but are you doing a huge table scan to get those 10 records? Are you bringing back a ton of records from the database into a collection and then taking 10 items from the collection instead of doing server-side paging on the database? A simple index might help, but you are going to have to do some testing to be sure.

How can I estimate browser's Javascript capabilities?

I serve a web page which makes the client do quite a lot of Javascript work as soon as it hits. The amount of work is proportional to the amount of content, which varies a lot.
In cases where there is a huge amount of content, the work can take so long that clients will issue their users with one of those "unresponsive script - do you want to cancel it?" messages. In cases with hardly any content, the work is over in the blink of an eye.
I have included a feature where, in cases where the content is larger than some value X, I include a "this may take a while" message to the user which is displayed before the hard work starts.
The trouble is choosing a good value for X since, for this particular page, Chrome is so very much faster than Firefox which is faster than IE. I'd like to warn all users when appropriate, but avoid putting the message up when it's only going to be there for 100ms since this is distracting. In other words, I'd like the value for X to also depend on the browser's Javascript capabilities.
So does anyone have a good way of figuring out a browser's capabilities? I'm currently considering just explicitly going off what the browser is, but that seems hacky, and there are other factors involved I guess.
If the data is relatively homogeneous, one method might be to have a helper function that checks how long a particular subset of the data has taken to go through, and make a conservative estimate of how long the entire set will take.
From there, decide whether to display the message or not.
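A sketch of that estimate (processItems, items, and showSlowWarning are hypothetical stand-ins for the page's own work and message):
var SAMPLE = 100;                                  // size of the trial chunk
var t0 = new Date().getTime();                     // Date arithmetic for old-IE compatibility
processItems(items.slice(0, SAMPLE));
var perItem = (new Date().getTime() - t0) / SAMPLE;
if (perItem * items.length > 1000) {               // conservative: warn above ~1s total
    showSlowWarning();                             // the "this may take a while" message
}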
This may not be where you want to go, but do you have a good idea why the JavaScript takes so long? Is it downloading a bunch of content over the wire, or is the actual formatting/churning in the browser the slow part?
You might even be able to do something incremental so that, while the whole shebang takes a long time, users see content 'build' and thus don't have to be warned.
Why not just let the user decide what X is? (e.g. like those "display 10 | 20 | 50 | 100" per page choosers) Then you don't have to do any measurement/guesswork at all; you can let them make the optimal latency / information content tradeoff.
This is somewhat misleading; usually when one discusses a browser's JS capabilities, it refers to the actual abilities of the browser, such as: does it support native XMLHTTP? Does it support ActiveX? And so on.
Regardless, there is no way to reliably deduce the processing power or speed of a browser. One might think you could run some simple stress tests, compute the result, and compare it to a list of past performances to see where the current user's browser ranks, and possibly use this information to arrive at an estimated time. The problem is that these calculations can be influenced by other activity in the browser (or on the OS); for instance, you run your profiling script, and the user's AV scanner starts up because it's 5pm; what normally takes 2s now takes 20s.
One thing to ask yourself is: does this processing have to take place right NOW? As n8wrl and Beska alluded to, you might need to code your own method whereby you break up the work into chunks and then operate on them one at a time using something like setTimeout(). This will give the engine time to 'breathe' and thus hopefully avoid the 'unresponsive script' warnings. Each of these chunks could also be used to update a progress bar (or similar) that gives the user some indication that work is being done.
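A minimal version of that chunking (the chunk size is a tuning knob):
function processInChunks(items, handleItem, chunkSize, onProgress) {
    var i = 0;
    (function nextChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) handleItem(items[i]);
        if (onProgress) onProgress(i / items.length);   // drive a progress indicator
        if (i < items.length) setTimeout(nextChunk, 0); // yield so the UI stays responsive
    })();
}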
Or you could take the approach GMail does: flash a very small, red "Loading..." text area in the corner of the window. Sometimes it's there for a few seconds, sometimes it's not there long enough to read it. Other times it blinks on and off several times. But you know when it's doing something.
Lastly, also on the point of incrementally 'building' the page, you could inspect the source of Chrome's new-tab page. Note: you can't view this using "view source"; instead, choose the "JavaScript console" option (while on the new-tab page) and look at the HTML source there. There is a comment that explains their general strategy, like so:
<!-- This page is optimized for perceived performance. Our enemies are the time
taken for the backend to generate our data, and the time taken to parse
and render the starting HTML/CSS content of the page. This page is
designed to let Chrome do both of those things in parallel.
1. Defines temporary content callback functions
2. Fires off requests for content (these can come back 20-150ms later)
3. Defines basic functions (handlers)
4. Renders a fast-parse hard-coded version of itself (this can take 20-50ms)
5. Defines the full content-rendering functions
If the requests for content come back before the content-rendering functions
are defined, the data is held until those functions are defined. -->
Not sure if that helps, but I think it does give insight into how some of the big players handle challenges such as this.
