I designed a website in which the whole site is contained within one page (index.php).
Within the page, <section> tags define different parts of the site (home, contact, blog etc.)
Navigation is achieved by buttons that are always visible; when clicked, they use JavaScript to change the visibility of the sections so that only one is shown at any time.
More specifically, this is done by using the hash in the url, and handling the hashchange event. This results in urls such as www.site.com/#home (the default if no other hash is present) and www.site.com/#contact.
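To illustrate, the hash handling amounts to something like this (simplified; in the real page the ids match my section names):

    // Show only the section whose id matches the current hash; default to #home.
    function showSectionFromHash() {
        var name = location.hash ? location.hash.slice(1) : 'home';
        var sections = document.querySelectorAll('section');
        for (var i = 0; i < sections.length; i++) {
            sections[i].style.display = (sections[i].id === name) ? 'block' : 'none';
        }
    }
    window.addEventListener('hashchange', showSectionFromHash);
    window.addEventListener('load', showSectionFromHash);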
I want to know if this is a good design. It works, but I get the feeling there must be a better way to achieve the same thing. To clarify, I was aiming for a site that loaded all the main content once, so that there were no more page loads after the initial one, and moving between sections would be smoother.
On top of this, another problem is introduced concerning SEO. The site shows up in Google, but if, for example, a search query matches a term in a specific section, clicking the result still loads the default #home page, not the specific section the term was found in. How can I rectify this?
Finally, one of the sections is a blog section, which is the only section that does not load all at once, since by default it loads the latest post from a database. When a user selects a different post from a list (which is itself loaded using AJAX), AJAX is used to fetch and display the new post, and pushState changes the history. Again, to give each post a unique URL that can be referenced externally, the menu changes the URL, which is handled by JavaScript, resulting in URLs such as www.site.com/?blogPost=2#blog and www.site.com/?blogPost=1#blog.
These posts aren't seen by google at all. Using the Googlebot tool shows that the crawler sees the blog section as always empty, so none of the blog posts are indexed.
What can I change?
(I don't know if this should be on the Webmasters Stack Exchange, so sorry if it's in the wrong place.)
Build a normal site. Give each page a normal URL. Let Google index those URLs. If you don't have pages for Google to index, then it can't index your content.
Progressively enhance the site with JS/Ajax.
When a link is followed (or another action is performed that, without JS, would load a new page), use JavaScript to transform the current page into the target page.
Use pushState to change the URL to the URL that would have been loaded if you were not using JavaScript. (Do this instead of using the fragment identifier (#) hack.)
Make sure you listen for history events so you can transform the page back when the back button is clicked.
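A rough sketch of those steps (the #content id is just a placeholder for wherever the page's main content lives, and fetch is used for brevity):

    // Fetch the normal, crawlable page and swap its content into the current page.
    function loadPage(url, push) {
        fetch(url)
            .then(function (response) { return response.text(); })
            .then(function (html) {
                var doc = new DOMParser().parseFromString(html, 'text/html');
                document.getElementById('content').innerHTML =
                    doc.getElementById('content').innerHTML;
                if (push) {
                    history.pushState({ url: url }, '', url); // the URL the server would have served
                }
            });
    }

    // Intercept clicks on internal links so they become in-page transformations.
    document.addEventListener('click', function (e) {
        var link = e.target.closest('a');
        if (!link || link.origin !== location.origin) return;
        e.preventDefault();
        loadPage(link.href, true);
    });

    // Back/forward: transform the page back to match the restored URL.
    window.addEventListener('popstate', function () {
        loadPage(location.href, false);
    });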
This results in situations such as:
User arrives at /foo from Google
/foo contains all the content for the /foo page
User clicks link to /bar
JavaScript changes the content of the page to match what the user would have got from going to /bar directly and sets URL to /bar with pushState
Note that there is also the (not recommended) hashbang technique which hacks a one-page site into a form that Google can index, but which is not robust, doesn't work for any other non-JS client and is almost as much work as doing things properly.
I'm trying to create a gallery that allows custom URLs rather than URLs prefixed with a hash.
For example:
http://www.myportfolio.com/gallery/3
rather than
http://www.myportfolio.com/gallery#3
So far everything is working fine: if I access the gallery from http://www.myportfolio.com/gallery, I am able to go to the next and previous image with the URL updated.
My main issue now is that although the URL is dynamic, it still cannot be bookmarked: if I enter http://www.myportfolio.com/gallery/4 to go to the 4th image, it doesn't work.
Is there a JavaScript approach to this, or do I need to combine it with PHP to redirect the URL?
It is possible to use client side JavaScript to handle this, although you'll need to set up the server so that every URL (that isn't for something like an image or script) loads the bootstrap document your SPA runs on. You just need to check location.href when the page loads and then set up the content you want.
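A minimal sketch of that client-side approach, assuming the server returns the same bootstrap document for every /gallery/... URL and that a showImage(n) function already exists in the gallery code:

    // On load, read the requested path and show the matching image.
    window.addEventListener('DOMContentLoaded', function () {
        var match = location.pathname.match(/^\/gallery\/(\d+)$/);
        var index = match ? parseInt(match[1], 10) : 1; // default to the first image
        showImage(index); // hypothetical function from the existing gallery code
    });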
That said, doing so is a very bad idea that completely misses the point of using pushState and friends in the first place.
The two points of being able to have a normal URL are that:
Clients where the JavaScript fails still get a useful page
The content for that URL is loaded in the initial page load (so it is available faster)
If you aren't going to take advantage of that, you might as well go back to hashbangs.
I am aware there are several other similar types of questions about pushState and SEO, but I cannot find one asking about this issue.
If I have a page with the URL site.com/Product/Detail2 that loads all the "pages" associated with site.com/Product into it and then scrolls Detail2 into view, will it cause problems with SEO if there are links like site.com/Product/Detail1 and site.com/Product/Detail3? Each of these URLs will actually load the exact same content, but scroll the user to the portion of the page that detail is on, similar to how fragment identifiers work. I understand Google won't run the JavaScript and will spider all those product URLs, but I have read that Google doesn't like different URLs returning the exact same content. For example, site.com/Product/Detail1 and site.com/Product/Detail2 will both return the same content when a user initially navigates to them, and code will scroll the user to the specific detail.
I don't want to have to do Ajax calls to dynamically load content just to avoid the different product sub-URLs pulling up the exact same content. I could see a solution where navigating to each URL initially loads only that one sub-URL's content, but then gets the rest of the Product content with Ajax calls. That would allow Google to think each of those product URLs returns unique content, but the user always sees one big page that scrolls the sub-URLs into view when they use the nav bar.
Has anyone else thought about this specific issue and dealt with it before?
Use the canonical tag on the detail pages (ones that describe only one item and ideally have descriptive urls).
More on rel="canonical"
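For example (the URL here is only illustrative), a detail page would declare its preferred URL in its <head> like this:

    <link rel="canonical" href="http://site.com/Product/Detail2">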
I have a section of a site with multiple categories of Widget. There is a menu with each category name. For anybody with Javascript enabled, clicking a category reveals the content of the category within the page. They can click between categories at will, seeing the DOM updated as needed. The url is also updated using the standard hash/hashbang (if we are being Google-friendly). So for somebody who lands on example.com/widgets, they can navigate around to example.com/widgets#one, example.com/widgets#two, example.com/widgets#three etc.
However, to support user agents without Javascript enabled, following one of these category links must load a new page with the category displayed, so for someone without javascript enabled, they would navigate to example.com/widgets/one, example.com/widgets/two, example.com/widgets/three etc.
My question is: What should happen when somebody with Javascript enabled lands on one of these URLS? What should someone with Javascript enabled be presented with when landing on example.com/widgets/one for example? Should they be redirected to example.com/widgets#one?
Please note that I need a single page site experience for anybody with Javascript enabled, but I want a multi-page site for a user agent without JavaScript. Any answer that doesn't address this fact doesn't answer the question. I am not interested in the merits or problems of hashbangs or single-page-sites vs multi-page-sites.
This is how I would structure it:
Use History.js to manage the URL. JS browsers with pushState support get full, correct URLs, JS browsers without pushState get hashed URLs, and non-JS users go to the full URL as normal with a page reload.
When a user clicks a link:
If they have JS:
All clicks to other pages are handled by a function that prevents the default action, grabs the href, passes the URL to an Ajax request, and updates the URL at the same time. The HTTP response for that Ajax request is then parsed and loaded into the content area.
Non JS:
The page refreshes as normal and loads the whole document.
When a page loads:
With JS: Attach an event handler to all your links to prevent the default so their href is dealt with via Ajax.
Without JS: Nothing. Allow anchors to work as normal.
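A rough sketch of the JS branch above using History.js and jQuery (the #content id is my own placeholder, and in practice you would only bind this to internal links):

    // Clicks update the URL via History.js instead of causing a page load.
    $(document).on('click', 'a', function (e) {
        e.preventDefault();
        History.pushState(null, document.title, this.href);
    });

    // History.js fires 'statechange' both for pushState calls and for back/forward.
    History.Adapter.bind(window, 'statechange', function () {
        var url = History.getState().url;
        $.get(url, function (html) {
            // Parse the full response and pull out just the content region.
            var content = $('<div>').append($.parseHTML(html)).find('#content').html();
            $('#content').html(content);
        });
    });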
I think you should definitely have all of your content accessible via a full, correct URL, load it in via Ajax, and then update the URL to reflect the address you got the content from. That way, when JS isn't running, you don't have to change anything.
Is that what you mean?
Apparently your question already contains the answer. You say:
I need a single page site experience for anybody with Javascript enabled
and then ask:
What should someone with Javascript enabled be presented with when landing on example.com/widgets/one for example? Should they be redirected to example.com/widgets#one?
I'd say yes, they should be redirected. I don't see any other option, given your requirements (and the fact that information about JavaScript capabilities and the hash fragment of the URL are not available on the server side).
If you can accept relaxing the requirements a bit, I see another option. Remember when the web was crowded with framesets, and we landed on a specific frame via AltaVista (Google wasn't around yet!) search? It was common to see a header saying that page was supposed to be displayed as a frame, and a link to take the user to the frameset version.
You could do something similar: when scripting is available, detect that you're at example.com/widgets/one and add a link to the single-page version. I know that's not ideal, but it's better than nothing, and maybe better than a nasty client-side redirect.
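A small sketch of that fallback (the /widgets path pattern and the wording of the link are just placeholders):

    // If scripting is available and we landed on a standalone category page,
    // offer a link to the single-page version instead of forcing a redirect.
    var match = location.pathname.match(/^\/widgets\/(\w+)$/);
    if (match) {
        var notice = document.createElement('p');
        var link = document.createElement('a');
        link.href = '/widgets#' + match[1];
        link.textContent = 'View this category in the single-page widget browser';
        notice.appendChild(link);
        document.body.insertBefore(notice, document.body.firstChild);
    }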
Why would you need to redirect them to a different page? The user arrived at the page looking for an answer, and they get the answer even if they have JavaScript enabled. It doesn't matter; the user's query has been fulfilled.
But what would happen if the user lands on example.com/widgets#one? You would need to set up an automatic redirect to example.com/widgets/one in that case. That could be done by checking whether JavaScript is enabled in the onload event and redirecting to the appropriate page.
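A literal sketch of that redirect, assuming the hash name maps directly onto the path (e.g. #one becomes /widgets/one):

    // Only runs when JavaScript is available; it sends hash URLs to the standalone pages.
    window.addEventListener('load', function () {
        if (location.pathname === '/widgets' && location.hash) {
            location.replace('/widgets/' + location.hash.slice(1));
        }
    });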
One way of designing such pages is to design them without JavaScript first.
You can use anchors in the page, so that:
example.com/widgets#one
will be a link to the element with the id 'one'.
Once your page works without JavaScript, you can then add the JavaScript layer. You can prevent links from being followed by using event.preventDefault (https://developer.mozilla.org/fr/docs/DOM/event.preventDefault), then add the desired JavaScript functionality.
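A sketch of that enhancement layer (the nav selector and the showCategory function are placeholders for your own code):

    function showCategory(id) {
        // hypothetical: hide all category sections, then show the one with this id
    }

    // Without JS the links behave as plain in-page anchors; with JS they reveal
    // the category instead of jumping, and keep the URL bookmarkable.
    var links = document.querySelectorAll('nav a[href^="#"]');
    for (var i = 0; i < links.length; i++) {
        links[i].addEventListener('click', function (event) {
            event.preventDefault();
            var id = this.getAttribute('href').slice(1);
            showCategory(id);
            history.pushState(null, '', '#' + id);
        });
    }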
Bleacher Report has a feature on their website that lets you browse between stories with the arrow keys. While that is nothing spectacular, I would like to understand how they do it AND change the URL in the address bar in the browser.
It's one thing to load up new content via AJAX, but I've never seen it done alongside refreshing the URL. There is also a slide-to-the-left animation from one piece of content to the next.
example:
http://bleacherreport.com/articles/1295213-in-depth-look-at-the-business-behind-a-holdout
use arrow keys
They aren't really "refreshing" the URL. As you said, they are using AJAX to grab the new content, and then once it is loaded, updating the URL (probably via window.history.pushState) to match what the route for that specific article is (that way if you actually did refresh the page, you'd still be taken to the same content).
You can do this manually (with the aforementioned window.history.pushState), or there are lots of frameworks that handle client-side URL routing, such as Backbone.js and Sammy.js.
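A rough sketch of that pattern (the #article id and the prevUrl/nextUrl values are assumptions; a real site would read them from rel="prev"/rel="next" links or a route table):

    var prevUrl = '/articles/previous-article';   // placeholder
    var nextUrl = '/articles/next-article';       // placeholder

    // Arrow keys load the previous/next story via Ajax, then update the address bar.
    document.addEventListener('keydown', function (e) {
        var target = (e.key === 'ArrowRight') ? nextUrl
                   : (e.key === 'ArrowLeft') ? prevUrl
                   : null;
        if (!target) return;
        fetch(target)
            .then(function (r) { return r.text(); })
            .then(function (html) {
                var doc = new DOMParser().parseFromString(html, 'text/html');
                document.getElementById('article').innerHTML =
                    doc.getElementById('article').innerHTML;   // swap in the new story
                window.history.pushState({}, '', target);      // update the URL
            });
    });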
How do I make a table of contents for a GWT page, for the purpose of bookmarking and jumping directly to a subsection of a dynamic page?
The address for my web application looks like:
www.example.com/WebApp#param1=value1&param2=value2
This link displays a page with many subsections. I want to provide a feature for users to be able to bookmark and load subsections directly.
You can use the History class to get access to the URL after the # and react accordingly. It works really well, and is the officially recommended way of solving this problem.
A short tutorial: http://www.bluecoders.com/tutorials/gwthistory.html
Basically, History is a static class on which you can call addValueChangeHandler to register an object that should deal with any history changes. This supports direct linking (e.g. bookmarks) and also proper navigation when the user uses the back and forward buttons in the browser.
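As a rough sketch of that pattern in GWT (the WebApp class name and the scrollToSection method are placeholders, not from the question):

    import com.google.gwt.core.client.EntryPoint;
    import com.google.gwt.event.logical.shared.ValueChangeEvent;
    import com.google.gwt.event.logical.shared.ValueChangeHandler;
    import com.google.gwt.user.client.History;

    public class WebApp implements EntryPoint {
        public void onModuleLoad() {
            // React whenever the token after '#' changes (links, bookmarks, back/forward).
            History.addValueChangeHandler(new ValueChangeHandler<String>() {
                public void onValueChange(ValueChangeEvent<String> event) {
                    String token = event.getValue(); // e.g. "param1=value1&param2=value2"
                    scrollToSection(token);          // hypothetical: show/scroll to the subsection
                }
            });
            // Handle the token present on the initial page load (bookmarks, direct links).
            History.fireCurrentHistoryState();
        }

        // Call this when the user picks an entry from the contents table; it pushes a
        // new token so the subsection can be bookmarked and the back button works.
        private void onContentsEntrySelected(String token) {
            History.newItem(token);
        }

        private void scrollToSection(String token) {
            // hypothetical: find the widget for this token and scroll it into view
        }
    }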