I wonder whether content loaded dynamically by AJAX affects SEO / the ability of search engines to index the page.
I am thinking of doing a constantly loading page, something like the Tumblr dashboard, where content is automatically loaded as the user scrolls down.
A year later...
A while back Google came out with specifications for how to create XHR content that may be indexed by search engines. It involves pairing content in your asynchronous requests with synchronous requests that can be followed by the crawler.
http://code.google.com/web/ajaxcrawling/
No idea whether other search giants support this spec, or whether Google even still does. If anybody has any knowledge about the practicality of this method, I'd love to hear about their experience.
Edit: As of today, October 14, 2015, Google has deprecated their AJAX crawling scheme:
In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. ... Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
H/T: #mark-bembnowski
Five years later...
Latest update on SEO AJAX:
As of October 14, 2015
Google is now able to crawl and parse AJAX-loaded content.
SPAs and other AJAX-rendered pages no longer need to prepare two versions of the website for SEO.
Short answer: It depends.
Here's why: say you have some content that you want to have indexed; loading it with AJAX will ensure that it won't be. Therefore that content should be loaded normally.
On the other hand, say you have some content that you wish to index, but for one reason or another you do not wish to show it (I know this is not recommended and is not very nice to the end user anyway, but there are valid use cases), you can load this content normally, and then hide or even replace it using javascript.
As for your case where you have "constantly loading" content, you can make sure it's indexed by providing links for the search engines / non-JS-enabled user agents. For example, you can have some Twitter-like content and, at the end of it, a "more" button that links to content starting from the last item you displayed. You can hide the button using JavaScript so that normal users never know it's there, but the crawlers will index that content (by following the link) anyway.
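A minimal sketch of that idea, assuming a jQuery front end; the /posts?page=2 URL, the element ids, and the assumption that the server returns pre-rendered item HTML are all made up for the example:

<div id="feed">
  <!-- server-rendered items ... -->
</div>
<!-- Plain, crawlable pagination link; hidden and replaced by infinite scroll when JS runs -->
<a id="load-more" href="/posts?page=2">More posts</a>

<script>
  var $more = $('#load-more').hide();   // normal users never see it; crawlers still follow it

  $(window).on('scroll', function () {
    var nearBottom = $(window).scrollTop() + $(window).height()
                     > $(document).height() - 200;
    var nextUrl = $more.attr('href');
    if (nearBottom && nextUrl) {
      $more.removeAttr('href');          // crude guard against duplicate requests
      $.get(nextUrl, function (html) {
        $('#feed').append(html);         // html is assumed to be pre-rendered items
        // a real implementation would also set the URL of the next page here
      });
    }
  });
</script>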
If you have some content loaded by an Ajax request, then it is only loaded by user agents that run JavaScript code.
Search-engine robots generally don't support JavaScript (or don't support it well at all).
So chances are that the content loaded by an Ajax request will not be seen by search engine crawlers -- which means it will not be indexed, which is not good for your website.
Crawlers don't run JavaScript, so no, your content will not be visible to them. You must provide an alternative method of reaching that content if you want it to be indexed.
You should stick to what's called "graceful degradation" and "progressive enhancement". Basically this means that your website should function and content should be reachable when you start to disable some technologies.
Build your website with a classic navigation, and then "ajaxify" it. This way, not only is it indexed correctly by search engines, it's also friendly for users that browse it with mobile devices / with JS disabled / etc.
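A rough illustration of that approach, assuming jQuery; the #content id and the ?partial=1 parameter (a server endpoint that returns just the page body) are made up for the example:

<nav>
  <!-- Ordinary links: crawlers and no-JS users get full page loads -->
  <a href="/about.html">About</a>
  <a href="/contact.html">Contact</a>
</nav>
<div id="content"><!-- server-rendered page content --></div>

<script>
  // "Ajaxify" the existing navigation: with JS, pages load in place;
  // without JS, the links above keep working as normal.
  $('nav a').on('click', function (e) {
    e.preventDefault();
    $('#content').load(this.href + '?partial=1');
  });
</script>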
Two years later, the Bing and Yahoo search engines also now support Google's Ajax Crawling Standard. Information about the standard can be found here: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started.
The accepted answer on this question is no longer accurate. Since this post still shows in search results, I'll summarize the latest facts:
Sometime in 2009, Google released their AJAX crawling proposal. Other search engines added support for this scheme shortly thereafter. As of today, October 14, 2015, Google has deprecated their AJAX crawling scheme:
In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. ... Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
I was convinced that single-page applications could not be crawled by Google unless the server provided alternative content.
Reading this article made me think that, while that used to be true, nowadays it is a mistake to assume that JavaScript templating blocks Google's crawling: https://googlewebmastercentral.blogspot.fr/2015/10/deprecating-our-ajax-crawling-scheme.html
Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
I tested with a sample app using this tool: https://www.google.com/webmasters/tools/googlebot-fetch?utm_source=support.google.com/webmasters/&utm_medium=referral&utm_campaign=6155685
and it worked: Google saw my content (whose rendering was triggered by a jQuery plugin that waits for the DOM ready event and renders the content with Handlebars.js).
So here is the question: what is the state of the art in 2016? (i.e., are single-page applications indexed by Google, and are there drawbacks?)
A teammate of mine told me this; I quote him without offering any opinion on his testimonial:
I heard on a podcast about tests showing that the results are inconsistent: one time the page is correctly indexed, the other time it is not. IMHO Google is able to read JS pages, but it consumes too many resources, so it is not done systematically. Beware also: they announced that they were going to stop indexing content that is not visible, like content shown on click/rollover.
To conclude: I think those pages are indexed, but with a lower score.
Alright, so I've been writing Backbone.js apps for over a year now and I love the framework model. I've learned how to avoid all the pitfalls and such, but there's one area in which I'm still quite weak as a single-page app developer: how to handle SEO for a public-facing app.
I'm working on a blog project, and the easiest solution to my mind is to have a server generated list of all blog entries visible as a link from the /blog section that is rendered on page load, and to ensure that when hitting a /blog/:id url, the server loads the blog content into the very first div on the page, which will be set as display:none.
My question is whether this should be sufficient for a good search engine index. SEO is still my weakest skill as a developer. Are there techniques for making sure a search engine crawls this content first and is able to use that content for its more complex indexing?
Also, is there a way to blacklist the generated app content on the page as I know Google has been testing crawling JavaScript apps? In my mind that could never be done at the level it needs to be without some sort of standard browser level event that can be triggered on a full page render or after all data has been loaded.
Anyways, this is more of an ambiguous ticket I know, but it could end up being useful to people in the future if we get a collection of good answers here.
Most of the major search engines (including Google) render the content they receive from the website, in our (Google's) case with something close to a headless browser, so whatever you serve to users the search engines will also get. Serving different content to search engines, however, will get you into a dangerous area known as cloaking.
Hiding the content with display:none might backfire on you. We give hidden content much less weight in ranking.
If I decided to use some JavaScript on my website, like
$('#body').load(URL);
or
$.get(URL, {param:value}, function(){ ... });
or
document.title = 'TEXT';
Is it good for SEO? Or is it recommended to use pure PHP to put the data on the page for SEO purposes?
The question of whether JavaScript is good for SEO or not misses the point. We should pretty much assume that any content which is only available via JavaScript will not be crawled by the search engines. Google at least claims to be able to crawl some JavaScript-only content but is fairly tight-lipped about what exactly it can crawl. Other search engines probably don't crawl it, and it's certainly the case that not all do. So assume it doesn't get crawled.
That doesn't mean it's bad for SEO.
If the content would contribute to your SEO, then loading it with JavaScript is bad for SEO. If the content is neutral to SEO, then it's neutral for SEO. So the answer to your question really depends on the nature of your content. If the content is part of your SEO campaign, then stick with server-side HTML generation, be it PHP or some other method. Otherwise the question of SEO has no bearing on the decision to use JavaScript or not. Accessibility would be another thing to take into account: JavaScript-only content is terrible for that.
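To make the distinction concrete, a hedged sketch (the markup, ids, and the /widgets/tweets endpoint are invented for illustration): the copy you want to rank is plain HTML in the initial response, and the SEO-neutral widget is filled in by JavaScript afterwards.

<!-- Content that should rank: plain HTML in the server response -->
<article>
  <h1>Blue Widgets: Buyer's Guide</h1>
  <p>Everything you need to know about choosing a blue widget...</p>
</article>

<!-- Content that is neutral to SEO: fine to load via JavaScript -->
<div id="related-tweets"></div>
<script>
  $('#related-tweets').load('/widgets/tweets');  // hypothetical endpoint returning HTML
</script>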
The larger search engines can and do render limited amounts of JavaScript. However, for SEO purposes your best bet is rendering the content via HTML rather than JavaScript. A good rule of thumb is to use HTML for content and for expressing basic content structure (e.g. paragraph text = p, lists = ul/ol, headings = h1/h2/h3, etc.), CSS for presentation, and JS for client-side programming. With that being said, always ensure a good user experience first. If you can do the above while providing a great user experience, great! If you can't, put users first. It's likely you can keep both users and bots happy 95% of the time if you take the time to do so.
Further reading (sorry, I can only post one link as a new user):
Matt Cutts Interview (Check out #26 on Google Javascript Rendering)
A spider's view of Web 2.0
I think first you should consider what SEO means. It means "Search Engine Optimization" ... how does a search engine get data in the first place for it to be optimized?
It does a GET on the page and whatever data is returned in the GET is processed. No JS engine. No POST data. So you should be optimizing for whatever data is returned on a GET.
Additionally, you tagged this with PHP, but the question has nothing to do with PHP.
Have you seen any of the questions on this list?
https://stackoverflow.com/search?q=javascript+seo
No sir, Google does not interpret Flash and JavaScript properly, so it may not crawl areas that use JavaScript or Flash content. I suggest you keep your website simple, but if it is necessary to keep Flash or JavaScript content, then you should keep a text-based backup.
The first thing you should be asking is not what is good for SEO, but what is good for users. For users, loading data with JavaScript will give them an interactive page, where they can start seeing the page immediately while it is still loading and where the page can update without having to reload it.
From Google's Webmaster Guidelines and article on Cloaking, you should not assume that crawlers can understand JavaScript. This does not mean that you should not use JavaScript on your website, but rather that you should provide the textual equivalent in noscript tags, for use both by users with JavaScript disabled and by crawlers, bearing in mind that the content of these noscript tags should be roughly equivalent to what is shown with JavaScript enabled; showing different content to users and to search engines is called "cloaking" and is frowned upon, to say the least.
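A small illustration of that advice; the widget markup, the element id, and the /news/latest URL are made up, and the noscript block is meant to carry roughly the same content as the script-rendered version:

<div id="latest-news"></div>
<script>
  // Enhanced experience: fetch and inject the news client-side
  $('#latest-news').load('/news/latest?partial=1');
</script>
<noscript>
  <!-- Roughly the same content, server-rendered, for no-JS users and crawlers -->
  <ul>
    <li><a href="/news/1">First headline</a></li>
    <li><a href="/news/2">Second headline</a></li>
  </ul>
</noscript>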
Google doesn't (yet) execute a page's JavaScript (JS). So if your JS replaces or creates content on a page, that content would normally be invisible to the crawlers (not good).
But the Googlers have implemented a URL hack that enables your server to create pages (from the server, not from JS) with all the different variants of your JS page's content.
This solves the SEO problem of Ajax-powered pages. At least for Google searches...
See Crawlable Ajax.
JavaScript, or any script for that matter, should never be used to house your site's content, ever! The entire web is driven by HTML and CSS, and in rare cases XML languages; everything else is a headache when it comes to SEO. Ask yourself this question: what exactly is SEO, and what is it that search engines are indexing? JavaScript and other programming/scripting languages are proprietary in the sense that they are not standards defined by the W3C, which means they are essentially worthless when it comes to indexing content. On the other hand, HTML, CSS, and XML are real standards developed for the web! It's OK to use scripts to add extra functionality to your pages, embed apps like social networking plugins, and so on, but you should never use them to hold your website's HTML, CSS, or actual content, for any reason.
Here's a link to a good article that will explain why you should be using HTML and CSS, and not a million scripts: optimizing webpages using proper HTML markup.
Scripts cause other problems besides code that is hard for search engines to decipher. For one, they are harder for browsers to process, causing pages to load much more slowly than "static" pages made with HTML and CSS would. Pages made with PHP tend to create "dynamic" URLs that users and search engines cannot read. This is why Google recommends that people who use JSP or PHP for their webpages include a sitemap; otherwise your links will never be found and might as well not exist.
Stick to the conventions! Let's face it, we have standards for a reason. If every electronic component in your home had a different type of plug that required a special socket, and all those devices had differing voltage and amperage requirements, what would happen? You would essentially burn down your house! And you'd be spending five hours a day at the hardware store looking for adapters to fit your wall sockets.
If you plan on designing a website, use scripts only for embedding apps or connecting to a database, and use HTML and CSS to build "static" webpages. Also, use text links, as they are both human- and search-engine-readable, and easy to index and make sense of. Never use scripts for your links. Programming and scripting can be fun, but not on the internet.
Search engines index HTML, CSS, and content (multimedia, graphics, videos, text; that's it!); everything else is pointless and annoying to both users and search engines alike. For best results, use XML and design a custom language.
Google can crawl, index, and rank JavaScript-generated content.
But... it uses an old Chrome version (42) with an old JavaScript rendering engine.
The consequence is that your JavaScript code needs to work in older browsers and older Chrome versions (42 and earlier). So no fancy ES6 features: you need to use polyfills or a transpiler such as Babel, for example.
Although you can do a lot with JavaScript (like click events or injecting your mobile menu), it's still recommended to use a plain a href for navigation instead of a button with a JavaScript event that then calls a function to get to a new page.
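For example, instead of a button whose click handler navigates, a crawlable link that JavaScript merely enhances; a hedged sketch using only plain, ES5-compatible DOM APIs (the id, URL, and handler body are placeholders):

<!-- Crawlable: Googlebot can discover /products from the href alone -->
<a id="products-link" href="/products">Products</a>

<script>
  // ES5-only syntax so it also runs in the older Chrome used for rendering
  document.getElementById('products-link').addEventListener('click', function (e) {
    e.preventDefault();
    window.history.pushState({}, '', '/products');
    // ...fetch and render the products view here instead of a full page load...
  });
</script>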
You can check the mobile testing tool from Google: https://search.google.com/test/mobile-friendly and check the errors/warnings/logs. If the rendered output looks as intended, Google will be able to see your content.
In Search Console you can also request that the page be indexed. Sometimes the JavaScript crawler comes first, sometimes the 'classic' crawler.
Double-check it a few days afterwards by googling a sentence or paragraph from your page.
There's no definitive answer as to whether it's better or not. Content is content, and Google should rank your website, SPA, PWA, AMP site, PDF document, online doc, wiki page, and so on based on its content, not on the underlying technique.
If you are familiar with JavaScript, give it a go.
Regards, Peter
There have been many debates about this topic already here, but none of them fully answered my question so I figured I would pose it and hope I get one or two decent answers.
We're planning on relaunching our company website in the next few months. Our current site, for the most part, is text-driven and because of this we rank very well on Google, Yahoo, and Bing for our primary keywords. We want to increase the "Wow Factor" of the site a bit (we're an interactive agency) but still maintain a majority of our search engine footing. The option to use Flash, AJAX, and other technologies that are not considered to be search engine friendly have come up numerous times in our meetings and each time we have to evaluate what kind of impact it would have on us from an SEO perspective.
Assuming a good portion of the site content will be encapsulated within a Flash (swf) file, what would be the best course of action for maintaining current rankings? I've read numerous times that Google indexes Flash files but I am unsure as to what extent. Further, is there a method of telling Google not to index a Flash file (through a variable or otherwise)?
Finally, I had an idea that seemed sound in theory and wanted to put it out into the world and see what type of feedback I receive on it:
Again, assuming the whole page is in a Flash file living on index.html, would it be possible to build out the site as normal (set up a logical directory structure, add content to static pages within said structure, etc), specify paths to those static pages in a Google XML Sitemap file, and have the spiders crawl only those pages (which are rich in content) while the user experiences some concoction of Flash/Javascript/AJAX/etc? If this works, what would be the pros/cons of this solution? Thanks for bearing with me on this slightly off-kilter question.
Well, referencing Google, I found that they have made impressive strides in indexing Flash-based web pages. The only limitation I found from reading the article is that they are currently still limited in these three areas:
Googlebot does not execute some types of JavaScript. So if your web page loads a Flash file via JavaScript, Google may not be aware of that Flash file, in which case it will not be indexed.
We currently do not attach content from external resources that are loaded by your Flash files. If your Flash file loads an HTML file, an XML file, another SWF file, etc., Google will separately index that resource, but it will not yet be considered to be part of the content in your Flash file.
While we are able to index Flash in almost all of the languages found on the web, currently there are difficulties with Flash content written in bidirectional languages. Until this is fixed, we will be unable to index Hebrew-language or Arabic-language content from Flash files.
By the sounds of it, you won't run into any of these three limitations. Based on this document, Flash sounds like a viable option for you.
Adobe has been working on their end as well to make SWFs more search-engine friendly. So, with the combined efforts of Adobe and Google/Yahoo, even if you take a dip in ranking, within a year or two the search algorithms will be better than they are now.
As far as preventing indexing goes, you should be able to add a simple
User-agent: *
Disallow: /directory/
Disallow: /directory/page.html
to your robots.txt file.
Andrew,
I've had to deal with this sort of thing a few times and I'd recommend maintaining both a Flash site (for users) and an HTML site (for search engines). Here's how you do it:
With whatever server-side stack you're using, set up some kind of switch that determines whether a particular request is for HTML or for whatever your Flash movie consumes (XML, JSON, another SWF, whatever). Every page on your site should be able to return HTML as well as whatever you choose to feed your Flash movie. A query string parameter like "requestType=Flash" will work just fine.
Put all of the content in your HTML pages in a div tag and make the div invisible with CSS. Use SWFObject to check if the requesting browser supports Flash and, if it does, have SWFObject replace your HTML content with your Flash movie. Search engine spiders will ignore your scripts and simply crawl your HTML pages and if you'd like to show the HTML to users with browsers that don't support Flash (like mobile browsers), just make the HTML content visible after SWFObject has determined that the browser doesn't support Flash.
Once your Flash movie has loaded, have it request whatever data it needs from the server using the same URL of the page that it was loaded on, but with the addition of the switch variable above.
Handle navigation from that point on with SWFAddress. When a user clicks a button to request a new page, pass the request through SWFAddress first, which will update the browser history using the hash mark trick, and then have your Flash movie make its request to the server.
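A rough sketch of that setup; SWFObject's embedSWF call is real, but the file names, element id, dimensions, and the requestType parameter are placeholders taken from the description above:

<div id="site-content">
  <!-- Full HTML representation of this page: crawlable. In the real setup this div is
       hidden with CSS and re-shown only if Flash turns out to be unavailable. -->
  <h1>Welcome</h1>
  <p>All of the page's copy lives here as plain HTML...</p>
</div>

<script src="swfobject.js"></script>
<script>
  // If Flash is available, SWFObject replaces the HTML div with the movie;
  // otherwise the HTML above stays visible (and crawlers ignore this script entirely).
  swfobject.embedSWF(
    'site.swf',        // the Flash movie (placeholder name)
    'site-content',    // id of the element to replace
    '960', '600',      // width / height
    '9.0.0'            // required Flash Player version
  );
  // Once loaded, the movie requests its data from the same URL it was loaded on,
  // e.g. window.location.pathname + '?requestType=Flash', and SWFAddress handles
  // further navigation via the hash.
</script>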
I'm currently working on a site for a friend that uses this technique here (I should note, to protect my pride, that the site is still very much a work in progress):
http://www.casabarbuenosaires.com/
A browser request to any page on the site will first return the HTML representation of that page (you can view source in your browser to see that). SWFObject then replaces the HTML content with a Flash movie that loads a custom XML description of the same page which the Flash movie then constructs and displays.
I've worked on sites in the past that have used this technique and gotten excellent search engine results. Since you don't need to worry too much about what your HTML site looks like to humans, you can focus solely on what it looks like to search engines.
Another added benefit of building your site this way is that you are compelled to separate your site's content/copy from its visual representation. Throwing your entire site into a single SWF is generally NOT a good way to do that. It's much easier to maintain (or re-skin or scrap) a site when your content isn't all mixed up with your code.
Hope this helps,
Scott
I'm building a web application with the Zend Framework. I have wanted to include some AJAX type forms and modal boxes, but I also want my application to be as accessible as possible. I want my application to be enhanced by AJAX, but also fully functional without AJAX.
So as a general guideline...when should I not use AJAX? I mean, should I bother making my application usable without AJAX? Or does everyone have AJAX enabled browsers these days?
If you mean "accessible" in the ADA sense, AJAX is usually a no-no - your site should provide all its content and core functionality using only standard (X)HTML and CSS. Any javascript used should merely extend the core functionality, and your site should be coded to work elegantly in the absence of a javascript-enabled browser.
Examples: if you want a user to click on a thumbnail and get a full-size version of the image as a result, you can make the thumbnail a link. Then the onclick event fires a jQuery method that cancels the navigation behaviour of the link and pops up a jQuery floating div to show the image on the current page. If the user's browser doesn't support JavaScript, the onclick event never fires, and the user is presented with the image on a new page. The core functionality is the same with or without scripting.
EDIT: Skeleton example, sans JQuery-specific code.
<html>
<body>
    <!-- showPopup is a placeholder name for the method described above -->
    <a href="some_url" onclick="return showPopup(this.href);">Some URL</a>
</body>
</html>
To cancel the navigation operation, simply make sure that the method invoked by the onclick event returns false at the end.
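For completeness, a minimal stand-in for that method (the name showPopup and its body are made up; the real version would open the floating div described above):

<script>
  function showPopup(url) {
    // ...load `url` into a floating div / lightbox here...
    return false;   // returned to the onclick attribute, so the link's navigation is cancelled
  }
</script>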
A neat example of the JQuery image popup I described can be found here.
Use AJAX if it adds value for the user.
If the AJAX version adds a lot more value than the non-AJAX version, then it might justify the expense of developing a solution that caters to both clients. Generally I wouldn't recommend doing the extra work (remember: more code results in more maintenance).
I think one point is missing here: use AJAX only for content that no search engine needs to know about.
98% of users will have AJAX-enabled browsers.
A significant percentage of those people won't have JavaScript turned on when they first visit your site, though (or ever, perhaps).
I've seen websites that look like a blank page without JavaScript. Don't be one of them. Using JavaScript to fix layout issues is a horrible idea in my opinion. Make sure the site loads and looks OK without JavaScript. If people can at least see what they are missing out on, they are likely to switch it on; but if your website just looks broken, then...
I often have noscript block Flash and JavaScript until I make the decision that your site is worthy.
So be sure to tell me what I'm missing if I have JavaScript turned off.
It depends on the complexity of your web application.
If you can, having it functional with javascript disabled is great, because it makes your application usable not only by users on js-disabled browsers but also by robots. The day you decide to write an application to automatically fill your forms, for example, you don't have to write an API from the ground up.
In any case, do not use AJAX for EVERYTHING! I have just inherited a project that basically consists of a single page populated by a ton of AJAX calls, and I can tell you that just thinking about it gives me physical pain. I guess the original developer didn't like the concept of using the back/forward buttons in the browser as a means of navigation.
Unless you are targeting mobile devices or other non-standard web users, you can be fairly sure that the vast majority has Javascript enabled, because most major sites (including SO) rely heavily on it.
I want my application to be as accessible as possible.
You can do things like rendering your modals and forms as a page that can operate standalone.
The AJAX version pulls the template into a modal/container; the standalone version checks whether it's an AJAX request and renders the page including the header/footer (this can happen at the same URL if planned well).
The AJAX version intercepts the submit and does an AJAX submission, then shows an inline thank-you; the non-AJAX version opens a thank-you page. Once again, you can likely use the same pages for both of these functions if they're thought out correctly.
Reusing templates and URL's helps avoid additional maintenance for the AJAX/non-AJAX versions.
I want my application to be enhanced by AJAX, but also fully functional without AJAX.
Thinking through the structure of your URLs and templates can go a long way towards this, if you make most of your AJAX requests pull in completely rendered templates (as opposed to just data) then you can usually use the same URL to serve both versions. You just serve only the guts of the modal/form to the AJAX request and the entire page to a regular request.
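One common way to implement that, sketched as Express-style server code; the route, template names, and reliance on the X-Requested-With header (which jQuery's AJAX helpers send by default) are assumptions for the example, not a prescribed API:

var express = require('express');
var app = express();
app.set('view engine', 'ejs');

// The same URL serves both the full page and the AJAX partial.
app.get('/contact', function (req, res) {
  var isAjax = req.get('X-Requested-With') === 'XMLHttpRequest';
  if (isAjax) {
    res.render('contact-form');                         // just the guts of the modal/form
  } else {
    res.render('layout', { partial: 'contact-form' });  // header + footer + the same form
  }
});

app.listen(3000);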
When should I not use AJAX?
You should not use AJAX if doing so will cause a poor experience for a significant portion of your user base (there are of course techniques that can be used to mitigate this)
You should not use AJAX if the development time associated with implementing it will be too significant to justify the improvements in user experience
You should not use AJAX for content which has significant SEO value without implementing an appropriate fallback that allows it to be indexed (Crawlers are improving constantly but it's still a good idea)
I mean, should I bother making my application usable without AJAX? Or does everyone have AJAX enabled browsers these days?
I'd say a lot of the time it's unnecessary, as the vast majority of users will have AJAX-enabled browsers. But there are scenarios where it's critical, such as SEO, or when a large portion of your user base is likely to use browsers with poor JavaScript support or to have JavaScript/AJAX disabled.
A few examples of these scenarios:
A website for a company or government that uses an outdated browser as standard
A website where a large portion of the users may have a disability that affects their experience; for example, users with vision or motor-skill impairments may be negatively affected by content updating via AJAX, especially if it happens rapidly.
A site accessed regularly via a less common device or browser, where AJAX would negatively impact a large portion of users.
So what should I do?
Think about who is going to be using the site, how they're going to access it, and what they're going to access it with. Also try to think about not just the present but also the future.
Design the site in a manner that will cater to the majority of these users.
Think about who will gain and who will lose based on the decision to use AJAX; if in doubt, look at your analytics data to help weigh up the decision, and if you lack the data, it may be worth updating your tracking and obtaining a sample to aid the decision.
Consider whether the decision to use AJAX contradicts any core requirements for this project.
Use AJAX to enhance content where possible, as opposed to making it mandatory, i.e. the content should work with or without JS/AJAX.
Consider the additional development time involved with the use of AJAX (if any)
My experience is that we should add AJAX only after the site works without it, for a couple of reasons.
First, if something breaks in the AJAX and you don't have the site working without it, the site simply doesn't work. For example, a product list with pagination: it should work with the links alone, then use AJAX when possible.
Second, for site indexing and accessibility, it's better if it works without AJAX.
It's also easy to break something, even if only for a few moments: a bad piece of code, an uncaught exception, an external library that fails to load, a blocking browser extension...
After everything works without AJAX, it's much easier to add it. Just have the AJAX code intercept the action and add ajax=1 to the request; when returning the result, return only what you need if ajax=1, otherwise return everything.
In the product list example, I would return only the products and pagination HTML and add it to the correct div. If AJAX stops working, the whole page is loaded and the customer sees the second page as it loads.
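A hedged sketch of that pattern for the product-list example, assuming jQuery on the front end; the URL, element ids, and the server's handling of ajax=1 (returning only the products and pagination HTML) are assumptions:

<div id="listing">
  <div id="products"><!-- server-rendered product list --></div>
  <div id="pagination">
    <a href="/products?page=2">2</a>
    <a href="/products?page=3">3</a>
  </div>
</div>

<script>
  // Delegated handler so pagination links inside freshly loaded HTML keep working;
  // without JS the plain links above still load the full page.
  $('#listing').on('click', '#pagination a', function (e) {
    e.preventDefault();
    $.get(this.href, { ajax: 1 }, function (html) {
      $('#listing').html(html);   // server returns only products + pagination for ajax=1
    });
  });
</script>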
AJAX adds a lot of value to the UX. If done right, the user gets a great feel when using the site, and data usage is lower because the whole page isn't loaded every time.
But the question being "when not to use AJAX", I would say you should always count on it to improve the UX, but not rely on it for the site to work (as other users also mentioned). And nowadays we need both: great code and a great user experience.
My practice is to use two main pages, let's say index.py and ajax.py. The first is responsible for generating the full website and is the default target of forms. The other generates only the output specific to the given AJAX query. The logic behind both of them is the same; only the method of generating output differs.
In jQuery I simply change the target when sending a request. It works both with and without AJAX, although it has been a long time since I've seen someone with JS and AJAX disabled.
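Roughly like this, reusing the index.py/ajax.py names from above; the form fields, the #results container, and the shape of ajax.py's response are made up for the sketch:

<form id="search" action="/index.py" method="get">
  <input type="text" name="q">
  <input type="submit" value="Search">
</form>
<div id="results"><!-- server-rendered results on a normal submit --></div>

<script>
  // Without JS the form submits to index.py and a full page comes back;
  // with JS the same fields go to ajax.py and only the partial result is injected.
  $('#search').on('submit', function (e) {
    e.preventDefault();
    $.get('/ajax.py', $(this).serialize(), function (html) {
      $('#results').html(html);
    });
  });
</script>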
I like the thought of coding your application without JavaScript / Ajax and then adding it in later to enhance the UI without depriving users of functionality just because they don't have JavaScript enabled. I read about this in Pro ASP.NET MVC but I think I've seen it elsewhere in reading about unobtrusive JavaScript.
You should not bloat your service with Web 2.0 effects like accordions, modal forms, image zoomers, etc.
Use modern technologies (AJAX is one of them) smartly and your users will be happy. Do not fear AJAX -- it is a very good tool for making the user experience smooth. But don't do things because you like them; do them because your users need them ;)
When you want to make a website that looks like a website, not a fugly imitation of a desktop app?
You should not use AJAX or JavaScript in cases where:
your system needs to be accessible
your system needs to be search friendly
However, by using a modern JS framework with some solid "unobtrusive" practices, you can progressively enhance pages so that they remain accessible and search-friendly while offering a slick UI to users.
This totally depends on the type of application or feature you're developing. If it is crucial that the application is accessible despite the absence of JavaScript, then it helps to have fallback methods (i.e. alternative forms) that allow your users to use said functionality/feature. That will require you to invest some of your time developing ways of collecting information not just with client-side scripts but also on the server side.
For miscellaneous features that only serves to enhance user experience, it's mostly not worth it to develop fallback methods.
There's no reason to totally not use AJAX. AJAX helps minimize your traffic after all.
You can, if you wish, always use AJAX and update the history state using pushState, or, for compatibility with non-HTML5-compliant browsers, use the hash.
With this, you can have your server load a page, then have JavaScript read location.hash and resume the state of the application based on the hash.
For example, I go to /index.html and click something, say a client, to open the client view; you change the hash to #/view/client/{client_id}/. Then, if I reload or go back using the browser, the hash changes and you can use the hashchange event to capture it and match the site's state to the new hash. The same works if I bookmark a certain state.
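A minimal sketch of that hash-routing idea; the #/view/client/{id}/ route format and the rendering stubs are made up:

<script>
  // Restore or update application state from the URL hash, e.g. #/view/client/42/
  function renderFromHash() {
    var match = location.hash.match(/^#\/view\/client\/(\d+)\//);
    if (match) {
      // ...fetch and render the client with id match[1] here...
      console.log('Showing client', match[1]);
    } else {
      console.log('Showing default view');
    }
  }

  window.addEventListener('hashchange', renderFromHash);  // back/forward, manual hash edits
  renderFromHash();                                       // initial load and bookmarks
</script>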
A couple of other scenarios where one may be better off NOT using AJAX:
Letting someone log into the web application: use a traditional form submit instead.
Searching and returning more than a few hundred rows from the database: either break the process down or let the server-side language handle it.