I have a section of a site with multiple categories of Widget. There is a menu with each category name. For anybody with Javascript enabled, clicking a category reveals the content of the category within the page. They can click between categories at will, seeing the DOM updated as needed. The url is also updated using the standard hash/hashbang (if we are being Google-friendly). So for somebody who lands on example.com/widgets, they can navigate around to example.com/widgets#one, example.com/widgets#two, example.com/widgets#three etc.
However, to support user agents without Javascript enabled, following one of these category links must load a new page with the category displayed, so for someone without javascript enabled, they would navigate to example.com/widgets/one, example.com/widgets/two, example.com/widgets/three etc.
My question is: What should happen when somebody with Javascript enabled lands on one of these URLs? What should someone with Javascript enabled be presented with when landing on example.com/widgets/one for example? Should they be redirected to example.com/widgets#one?
Please note that I need a single page site experience for anybody with Javascript enabled, but I want a multi-page site for a user agent without JavaScript. Any answer that doesn't address this fact doesn't answer the question. I am not interested in the merits or problems of hashbangs or single-page-sites vs multi-page-sites.
This is how I would structure it:
Use History.js to manage the URL. JS browsers with pushState support get full, correct URLs; JS browsers without pushState get hashed URLs. Non-JS users go to the full URL as normal, with a page reload.
When a user clicks a link:
If they have JS:
All clicks on links to other pages are handled by a function that prevents the default action, grabs the href, passes the URL to an Ajax request, and updates the URL at the same time. The HTTP response for that Ajax request is then parsed and loaded into the content area (see the sketch after this list).
Non JS:
The page refreshes as normal and loads the whole document.
When a page loads:
With JS: Attach an event handler to all your links to prevent the default so their href is dealt with via Ajax.
Without JS: Nothing. Allow anchors to work as normal.
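As a rough sketch of the JS path (assuming jQuery, a #content container present on every page, and the native history API, which History.js wraps with a hash fallback for older browsers):

$(document).on('click', 'a[href]', function (e) {
  if (this.host !== location.host) return;        // leave external links alone
  e.preventDefault();                             // stop the normal page load
  var url = this.href;
  history.pushState(null, '', url);               // update the URL at the same time
  $.get(url, function (html) {                    // fetch the full page via Ajax
    var fresh = $('<div>').append($.parseHTML(html)).find('#content');
    $('#content').html(fresh.html());             // load it into the content area
  });
});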
I think you should definitely have all of your content accessible via a full, correct URL, load it in via Ajax, and then update the URL to reflect the address you got the content from. That way, when JS isn't running, you don't have to change anything.
Is that what you mean?
Apparently your question already contains the answer. You say:
I need a single page site experience for anybody with Javascript enabled
and then ask:
What should someone with Javascript enabled be presented with when landing on example.com/widgets/one for example? Should they be redirected to example.com/widgets#one?
I'd say yes, they should be redirected. I don't see any other option, given your requirements (and the fact that information about JavaScript capabilities and the hash fragment of the URL are not available on the server side).
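If you go that route, the redirect itself can be a tiny script near the top of the page served at example.com/widgets/one. The path-to-hash mapping below is an assumption based on the URLs in the question:

// With scripting available, send /widgets/<category> to the single-page version.
var match = location.pathname.match(/^\/widgets\/(\w+)$/);
if (match) {
  location.replace('/widgets#' + match[1]);   // replace() keeps the history clean
}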
If you can accept relaxing the requirements a bit, I see another option. Remember when the web was crowded with framesets, and we landed on a specific frame via AltaVista (Google wasn't around yet!) search? It was common to see a header saying that page was supposed to be displayed as a frame, and a link to take the user to the frameset version.
You could do something similar: when scripting is available, detect that you're at example.com/widgets/one and add a link to the single-page version. I know that's not ideal, but it's better than nothing, and maybe better than a nasty client-side redirect.
Why should you need to redirect them to a different page? The user arrived at the page looking for an answer, and he gets the answer even if he has JavaScript enabled. It doesn't matter; the user's query has been fulfilled.
But what would happen if the user lands on example.com/widgets#one? You would need to set up an automatic redirect to example.com/widgets/one in that case. That could be done by checking whether JavaScript is enabled in the onload event and redirecting to the appropriate page.
One way to design such pages is to design without JavaScript first.
You can use anchors in the page, so:
example.com/widgets#one
will be a link to the element with the id 'one'.
Once your page works without JavaScript, you can add the JavaScript layer. You can prevent links from being followed by using event.preventDefault()
(https://developer.mozilla.org/fr/docs/DOM/event.preventDefault), then add the desired JavaScript functionality.
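As a small illustration of that layer (assuming each category link points at #one, #two, etc. and each category is a section with the matching id; the selectors are assumptions, not the asker's markup):

// JavaScript layer added on top of the working non-JS page.
document.querySelectorAll('a[href^="#"]').forEach(function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault();                          // stop the default jump
    var id = link.getAttribute('href').slice(1);
    document.querySelectorAll('section').forEach(function (section) {
      section.hidden = (section.id !== id);          // show only the chosen category
    });
    location.hash = id;                              // keep the URL in sync
  });
});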
Related
We provide a live chat service to our customers. A customer just copies some code and puts it in their footer; then they can have video chat, co-browsing, and many other things.
The problem comes when the user switches from page to page, so we have resume functionality as well, but that's not a robust solution.
So I've come up with two solutions.
1. Iframe solution
I'll give the client a some.html file which they'll need to upload to their root URL; then, for video chat and co-browsing, we load the current page into some.html's iframe and the chat appears in some.html.
That works well: the chat box appears seamlessly, no page-reload effects come in, and since it's on the same domain I can access all the contents of the iframe.
2. Another hacky solution (not implemented yet, but it looks good)
I was thinking that instead of redirecting the user to a new page (some.html),
I should clear all the contents of the current page and load the same URL in one iframe within the page.
I think that will work well, but I'm afraid some clients might be using complex JS-based web apps, so if I remove the complete body from their page they might have problems.
As far as I know, I can remove all DOM nodes along with their event handlers, but is there a way to clean the JS runtime, so that all JS objects are destroyed and removed from scope and no longer run?
So, is there a way to clear any page completely, with all its HTML and associated JavaScript as well? In other words, reset the page to blank.
Finally, I found that there is no way to reset the page, but I got another way to get it done.
When needed, we can redirect the user to the same page with a query string that identifies that this reload is for the iframe. We put a small bit of extra code in the head which removes everything before the DOM, CSS and JS load, and creates just one iframe of the same URL.
This allows the user to see no change in the URL and to browse the website as normal without any problem, while my chat box is always there, in the same state, across all pages.
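A simplified sketch of that idea (the ?chatFrame=1 parameter name is invented here, and this version swaps the document once it has loaded; the real snippet described above runs earlier, in the head, so that the original DOM, CSS and JS are never loaded at all):

// If this load carries the marker, blank the document and show the same
// URL (without the marker) inside one full-size iframe.
if (location.search.indexOf('chatFrame=1') !== -1) {
  document.addEventListener('DOMContentLoaded', function () {
    document.documentElement.innerHTML =
      '<head></head><body style="margin:0"></body>';
    var frame = document.createElement('iframe');
    frame.src = location.pathname;                               // same page, marker stripped
    frame.style.cssText = 'border:0;width:100vw;height:100vh';
    document.body.appendChild(frame);                            // chat box lives on in here
  });
}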
I will make it live soon on tagove.com.
Why don't you empty the HTML page using jQuery's empty() function first,
then remove/update the link, i.e. <script src="...."></script>, so that the HTML has no dependency on that JavaScript and it won't be able to modify the DOM,
and then try to build a JavaScript program to remove any file in the folder which is isolated (no calls, no dependencies, no connection whatsoever)?
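A sketch of the first two steps with jQuery (note that scripts which have already run keep their handlers, timers and state, which is exactly the limitation the question is about):

$(function () {
  $('script').remove();   // drop the script tags so they aren't referenced again
  $('body').empty();      // clear everything rendered in the body
});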
I'm trying to create a gallery that allows custom URLs rather than URLs prefixed with a hashtag.
For example:
http://www.myportfolio.com/gallery/3
rather than
http://www.myportfolio.com/gallery#3
So far everything is working fine: if I access it from http://www.myportfolio.com/gallery I am able to go to the next and previous image with the URL updated.
My main issue is that although the URL is now dynamic, it still cannot be bookmarked; if I enter http://www.myportfolio.com/gallery/4 to go to the 4th image, it doesn't work.
Is there a JavaScript approach to this, or do you need a combination with PHP to redirect the URL?
It is possible to use client side JavaScript to handle this, although you'll need to set up the server so that every URL (that isn't for something like an image or script) loads the bootstrap document your SPA runs on. You just need to check location.href when the page loads and then set up the content you want.
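A sketch of that check (assuming URLs like /gallery/<n> and a showImage(n) function that the gallery code already provides; both names are placeholders):

// On load, read the image number out of the path and jump straight to it.
// The server must serve the same gallery page for /gallery/<n> too.
document.addEventListener('DOMContentLoaded', function () {
  var match = location.pathname.match(/^\/gallery\/(\d+)$/);
  if (match) {
    showImage(parseInt(match[1], 10));   // placeholder for "go to image n"
  }
});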
That said, doing so is a very bad idea that completely misses the point of using pushState and friends in the first place.
The two points of being able to have a normal URL are that:
Clients where the JavaScript fails still get a useful page
The content for that URL is loaded in the initial page load (so it is available faster)
If you aren't going to take advantage of that, you might as well go back to hashbangs.
I designed a website in which the whole site is contained within one page (index.php).
Within the page, <section> tags define different parts of the site (home, contact, blog etc.)
Navigation is achieved by buttons that are always visible, and when clicked use javascript to change the visibility of the sections, so that only one is shown at any time.
More specifically, this is done by using the hash in the url, and handling the hashchange event. This results in urls such as www.site.com/#home (the default if no other hash is present) and www.site.com/#contact.
I want to know if this is a good design. It works, but I get the feeling there must be a better way to achieve the same thing? To clarify, I was aiming for site that loaded all the main content once, so that there were no more page loads after the initial load, and moving between sections would be smoother.
On top of this, another problem is introduced concerning SEO. The site shows up in Google, but if, for example, a search query contains a term from a specific section, clicking the result still loads the default #home page, not the specific section the term was found in. How can I rectify this?
Finally, one of the sections is a blog section, which is the only section that does not load all at once, since by default it loads the latest post from a database. When a user selects a different post from a list (which in itself is loaded using AJAX), AJAX is used to fetch and display the new post, and pushState changes the history. Again, to give each post a unique url that can be referenced externally, the menu changes the url which is handled by javascript, resulting in urls such as www.site.com/?blogPost=2#blog and www.site.com/?blogPost=1#blog.
These posts aren't seen by google at all. Using the Googlebot tool shows that the crawler sees the blog section as always empty, so none of the blog posts are indexed.
What can I change?
(I don't know if this should be on the webmasters stackexchange, so sorry if its in the wrong place)
Build a normal site. Give each page a normal URL. Let Google index those URLs. If you don't have pages for Google to index, then it can't index your content.
Progressively enhance the site with JS/Ajax.
When a link is followed (or other action that, without JS, would load a new page is performed) use JavaScript to transform the current page into the target page.
Use pushState to change the URL to the URL that would have been loaded if you were not using JavaScript. (Do this instead of using the fragment identifier (#) hack; a rough sketch follows the example below.)
Make sure you listen for history events so you can transform the page back when the back button is clicked.
This results in situations such as:
User arrives at /foo from Google
/foo contains all the content for the /foo page
User clicks link to /bar
JavaScript changes the content of the page to match what the user would have got from going to /bar directly and sets URL to /bar with pushState
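A condensed sketch of those steps (the #main container and the use of fetch are assumptions; any Ajax mechanism will do):

// Turn a full page load into an in-place update while keeping real URLs.
function loadPage(url, push) {
  return fetch(url)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      var doc = new DOMParser().parseFromString(html, 'text/html');
      document.querySelector('#main').innerHTML =
        doc.querySelector('#main').innerHTML;             // transform current page into target
      document.title = doc.title;
      if (push) history.pushState({ url: url }, '', url); // real URL, no fragment hack
    });
}

document.addEventListener('click', function (e) {
  var link = e.target.closest('a[href]');
  if (!link || link.origin !== location.origin) return;   // only enhance internal links
  e.preventDefault();
  loadPage(link.href, true);
});

// Listen for history events so the back button transforms the page back.
window.addEventListener('popstate', function () {
  loadPage(location.href, false);
});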
Note that there is also the (not recommended) hashbang technique which hacks a one-page site into a form that Google can index, but which is not robust, doesn't work for any other non-JS client and is almost as much work as doing things properly.
I was searching for a script, or at least a code snippet, but haven't really made any progress. Anyway, I'm looking for a script that works like a simple pagination script, but it should be accessible by linking from anywhere in the document and by calling it via the URL (e.g. on www.abc.de/default.html#thirddiv the third page of the pagination is displayed). Further, the contents should be loaded on request (when the user clicks the link and enters the specific page of the pagination), so that cookies that have been set or deleted in the same document earlier can be used later without reloading the entire page. Something like that is used on Facebook for calling and loading contents.
I've found a script on CSS-Tricks called BetterBlogroll, but I can't really get my mind into it. A pagination script from Dynamic Drive is already working very well on the page, but my problem is that three of them should be running on the same page and, as I said, the content should be loaded at the user's request.
The script I need does not have to come with loads of CSS; the best way would be plain JavaScript with only the required CSS and HTML. Anything else just gets in the way. If anyone can help me out here, I'd be very thankful.
Check out Ryan Bates's Railscasts #174 and #175. These two are not Rails-specific and explain this well.
In Google Reader, you can use a bookmarklet to "note" a page you're visiting. When you press the bookmarklet, a little Google form is displayed on top of the current page. In the form you can enter a description, etc. When you press Submit, the form submits itself without leaving the page, and then the form disappears. All in all, a very smooth experience.
I obviously tried to take a look at how it's done, but the most interesting parts are minified and unreadable. So...
Any ideas on how to implement something like this (on the browser side)? What issues are there? Existing blog posts describing this?
Aupajo has it right. I will, however, point you towards a bookmarklet framework I worked up for our site (www.iminta.com).
The bookmarklet itself reads as follows:
javascript:void((function(){
var e=document.createElement('script');
e.setAttribute('type','text/javascript');
e.setAttribute('src','http://www.iminta.com/javascripts/new_bookmarklet.js?noCache='+new%20Date().getTime());
document.body.appendChild(e)
})())
This just injects a new script into the document that includes this file:
http://www.iminta.com/javascripts/new_bookmarklet.js
It's important to note that the bookmarklet creates an iframe, positions it, and adds events to the document to allow the user to do things like hit escape (to close the window) or to scroll (so it stays visible). It also hides elements that don't play well with z-positioning (flash, for example). Finally, it facilitates communicating across to the javascript that is running within the iframe. In this way, you can have a close button in the iframe that tells the parent document to remove the iframe. This kind of cross-domain stuff is a bit hacky, but it's the only way (I've seen) to do it.
Not for the faint of heart; if you're not good at JavaScript, prepare to struggle.
At its most basic level, it will use createElement to create the elements and appendChild or insertBefore to insert them into the page.
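A bare-bones illustration of that (not Google's code; the id and styling are made up):

// Create an overlay element and drop it on top of the current page.
var panel = document.createElement('div');
panel.id = 'note-panel';
panel.style.cssText = 'position:fixed;top:20px;right:20px;z-index:99999;' +
                      'background:#fff;border:1px solid #ccc;padding:10px';
panel.innerHTML = '<strong>Note this page</strong>';   // the form markup goes here
document.body.appendChild(panel);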
You can use a simple bookmarklet to add a <script> tag which loads an external JavaScript file that can push the necessary elements into the DOM and present a modal window to the user. The form is submitted via an AJAX request, processed server-side, and comes back with either success or a list of errors the user needs to correct.
So the bookmarklet would look like:
javascript:code-to-add-script-tag-and-init-the-script;
The external script would include:
The ability to add an element to the DOM
The ability to update innerHTML of that element to be the markup you want to display for the user
Handling for the AJAX form processing
The window effect can be achieved with CSS positioning.
As for one complete resource for this specific task, you'd be pretty lucky to find anything. But have a look at the smaller, individual parts and you'll find plenty of resources. Have a look around for information on modal windows, adding elements to the DOM, and AJAX processing.
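Put together, the external script might look roughly like this (the endpoint and field names are invented for the example; a real version also needs cross-domain handling on the server):

(function () {
  // Add an element to the DOM and give it the modal markup.
  var modal = document.createElement('div');
  modal.style.cssText = 'position:fixed;top:10%;left:50%;margin-left:-150px;' +
                        'width:300px;z-index:99999;background:#fff;padding:1em';
  modal.innerHTML = '<form id="note-form">' +
                    '<textarea name="description"></textarea>' +
                    '<button type="submit">Save</button>' +
                    '</form>';
  document.body.appendChild(modal);

  // Submit the form via AJAX and act on the server's answer.
  document.getElementById('note-form').addEventListener('submit', function (e) {
    e.preventDefault();
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://example.com/notes');           // invented endpoint
    xhr.onload = function () {
      if (xhr.status === 200) {
        document.body.removeChild(modal);                    // success: close the window
      } else {
        alert('Please correct the errors and try again.');   // or render the returned errors
      }
    };
    xhr.send(new FormData(e.target));
  });
})();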