I started looking at the new HTML5 history API.
However, I have one question: how can one handle a page refresh?
For example, a user clicks a link, which is handled by a JS function that asynchronously loads the content of the page and changes the URL with history.pushState().
Then the user refreshes the page, but the URL of course does not exist on the server.
How do you handle situations like this? With the hash solution there was no such issue.
Thanks
This does require server-side support. The ideal use of .pushState is to allow URLs from JavaScript applications to be handled intelligently by the server.
This could be either by re-serving the same JavaScript application, and letting it inspect window.location, or by rendering the content at that URL on the server as you would for a normal web application. The specifics obviously vary depending on what you're doing.
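For example, a minimal sketch of the "re-serve the same JavaScript application" approach, here assuming a Node/Express server (the framework choice and file names are just illustrative assumptions):

// Hypothetical Express server: real static files are served normally, and any
// other URL (e.g. one produced by pushState) re-serves the app shell, letting
// the client-side code inspect window.location and render the right view.
var express = require('express');
var path = require('path');
var app = express();

app.use(express.static(path.join(__dirname, 'public')));

app.get('*', function (req, res) {
    res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);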
You need to perform server-side redirection for copied-and-pasted "fake" URLs.
It all depends on what server-side technology you're using. There is no JavaScript-only way; you could write a crazy function in your 404 page that redirects the user based on the incoming URL, but that is not a good solution.
Then the user refreshes the page, but the URL of course does not exist on the server
What do you mean by this? I have a feeling your assumption is wrong.
Actually yes, the point is that it is the developer who should provide the (server-side or client-side) implementation of the URL-to-page-state correspondence.
That is, if, once again, I've got the question right.
If you are using ASP.NET MVC on your server, you can add the following to your RegisterRoutes function:
// Catch-all route: any URL is sent to Home/Index so the client-side router can handle it.
routes.MapRoute(
    name: "Default",
    url: "{*url}",
    defaults: new { controller = "Home", action = "Index" }
);
This will send all requests to the Index action of your HomeController, and your UI will handle the actual route.
For people looking for a JS-only solution, use the popstate event on window. Please see MDN - popstate for full details, but essentially this event is passed the state object you gave to pushState or replaceState. You can access this object as event.state and use its data to determine what to do, e.g. show an AJAX-loaded portion of a page.
I'm not sure if this was unavailable in 2011, but it is available in all modern browsers (IE10+) now.
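A rough sketch of that pattern (the state shape and the loadSection helper are made up for illustration):

// Push a state object along with the new URL when loading content via AJAX...
history.pushState({ section: 'inbox' }, '', '/inbox');

// ...and restore that state when the user navigates back/forward.
window.addEventListener('popstate', function (event) {
    if (event.state) {
        loadSection(event.state.section); // hypothetical helper that re-renders the AJAX portion
    }
});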
I've been teaching myself react-router, and now I'm wondering which method should be used for going to another page.
According to this post (Programmatically navigate using react router), you can go to another page by this.props.history.push('/some/path').
Honestly, however, I'm not quite sure about the differences between window.location.href and history.pushState.
As far as I understand, window.location.href = "/blah/blah"; takes you to another page by making a new HTTP call, which refreshes the browser.
On the other hand, what history.pushState (and this.props.history.push('/some/path')) does is push a state. This, apparently, changes the HTTP referrer and consequently affects XMLHttpRequest objects created afterwards.
Here is an excerpt from Mozilla's documentation:
Using history.pushState() changes the referrer that gets used in the HTTP header for XMLHttpRequest objects created after you change the state.
To me, it sounds like both methods make a new HTTP call. If so, what are the differences?
Any advice will be appreciated.
PS
I thought that developers would need to consider whether it's necessary to get data from the server, before deciding how to go to another page.
If you need to retrieve data from the server, window.location.href would be fine, since you'll make a new HTTP call. However, if you are using <HashRouter>, or you want to avoid refreshing your page for the sake of speed, what would be a good approach?
This question led me to make this post.
history.pushState does not make a new HTTP call (from the Mozilla doc you quoted):
Note that the browser won't attempt to load this URL after a call to pushState(), but it might attempt to load the URL later, for instance after the user restarts the browser.
window.location.href = "url" navigates the browser to new location, so it does make a new http request. Note: exception is the case when you specify new url as hash fragment. Then window is just scrolled to corresponding anchor
You can check both running them from devTools console having Network tab open. Be sure to enable "preserve log" option. Network tab lists all new http requests.
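A quick way to see the difference from the console (watch the Network tab while running each line; the paths here are arbitrary):

// No entry appears in the Network tab: only the address bar and history change.
history.pushState({ page: 2 }, '', '/some/path');

// A full navigation: the browser issues a new HTTP request and reloads the page.
window.location.href = '/blah/blah';

// Exception: a pure hash change also produces no new request.
window.location.href = '#section-2';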
You can test this in the Network tab of your dev tools. When using history with React Router, no refresh is made within the app (no HTTP request). You'd use this for a better UX flow and persistence of state. To my understanding, we're essentially modifying the URL (without an HTTP request), and React Router uses pattern matching to render components according to this programmatic change.
I use browserHistory in react-router v3. There is no refresh and the application state is maintained throughout.
To redirect, all I do is call browserHistory.push('/componentPath'), where componentPath is mapped in the routes config of the application.
So far I've had no issues; it works like a charm.
Hope this helps.
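For reference, a minimal sketch of that pattern (react-router v3 API; '/componentPath' stands for whatever path you mapped in your routes config):

import { browserHistory } from 'react-router';

// Pushes a new history entry without a full reload; react-router matches
// '/componentPath' against the routes config and renders the mapped component.
browserHistory.push('/componentPath');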
I am working on a pure HTML website; all pages are HTML with no relation to any server-side code.
Basically every request to the server is made using AJAX, I send data from forms, I process this data in Handlers, then I return a JSON string that will be processed back on the client side.
Let's say the page is loaded with parameters in the URL, something like question.html?id=1. Earlier, I used to read this query string in the Page_Load method, then read data from the database and so on...
Now, since these are pure HTML pages, I'm trying to think of an approach that will allow me to do the same. I have an idea, but it's 99% a bad idea.
The idea is to read URL parameters using JS (after the page has loaded), and then make an AJAX request, and then fetch the data and show them on the page. I know that instead of having 1 request to the server (Web Forms), we now have 2 requests: the first request to get the page, and the second request is the AJAX request. And of course this adds delay, since the page will initially be loaded without the actual data that I need inside it.
Is my goal impossible, or is there a mature approach out there?
Is my goal impossible, or is there a mature approach out there?
Lately there are a good handful of JavaScript frameworks designed around this very concept ("single page app") of having a page load up without any data pre-loaded in it, and accessing all of the data over AJAX. Some examples of such frameworks are AngularJS, Backbone.js, Ember.js, and Knockout. So no, this is not at all impossible. I recommend learning about these frameworks and others to find one that seems right for the site you are making.
The idea is to read URL parameters using JS (after the page has loaded), and then make an AJAX request, and then fetch the data and show them on the page.
This sounds like a fine idea.
Here is an example of how you can use JavaScript to extract the query parameters from the current page's URL.
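For instance, a minimal sketch of one way to do it (manual parsing of window.location.search; the id name comes from the question.html?id=1 example above):

// Parse window.location.search ("?id=1") into a simple key/value object.
function getQueryParams() {
    var params = {};
    var query = window.location.search.substring(1); // drop the leading "?"
    if (!query) {
        return params;
    }
    query.split('&').forEach(function (pair) {
        var parts = pair.split('=');
        params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
    });
    return params;
}

var id = getQueryParams().id; // "1" for question.html?id=1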
I know that instead of having 1 request to the server (Web Forms), we now have 2 requests: the first request to get the page, and the second request is the AJAX request. And of course this adds delay, since the page will initially be loaded without the actual data that I need inside it.
Here is why you should not worry about this:
A user's browser will generally cache the HTML file and associated JavaScript files, so the second time they visit your site, the browser will send requests to check whether the files have been modified. If not, the server will send back a short message simply saying that they have not been modified and the files will not need to be transmitted again.
The AJAX response will only contain the data that the page needs and none of the markup. So retrieving a page generated on the server would involve more data transfer than an approach that combines a cacheable .html file and an AJAX request.
So the total load time should be less even if you make two requests instead of one. If you are concerned that the user will see a page with no content while the AJAX data is loading, you can (a) have the page be completely blank while the data is loading (as long as it's not too slow, this should not be a problem), or (b) throw up a splash screen to tell the user that the page is loading. Again, users should generally not have a problem with a small amount of load time at the beginning if the page is speedy after that.
I think you are overthinking it. I'd bet that the combined two calls that you are worried about are going to run in roughly the same amount of time as the single Web Forms Page_Load would if you coded it otherwise - the only difference now being that the initial page load is going to be really fast (because you are only loading a lightweight HTML/CSS/images page with no slowdown from running any server code).
A common solution would be to then have a 'spinner' of some sort (an animated GIF) that gives the user a visual indication that the page isn't done loading while your AJAX calls wait to complete.
Watch a typical page load from almost any major website in any language and you are going to see many, many requests making up a single page load, whether it be pulling CSS/images from a CDN, JS from a CDN, loading Google Analytics, advertisements from ad networks, etc. Trying to get 100% of your page to load in a single call is not really a goal you should be worried about.
I don't think the 2-requests approach is a "bad idea" at all. In fact there is no other solution if you want to use only static HTML + AJAX (which is the modern approach to web development, since it allows you to reuse the AJAX requests for other non-HTML clients like Android or iOS native apps). Also, performance is very relative. If your client can cache the first static HTML, it will be much faster compared to the server-generated approach even if two requests are needed. Just use a network profiler to convince yourself.
What you can do if you don't want the user to notice any lag in the GUI is use a generic script that shows a popup hiding/blocking the full window (maybe with a "please wait" message) until the AJAX response is received and a "data-received" (or similar) event is triggered in the AJAX callback.
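A rough jQuery sketch of that idea (the #loading-overlay element, the handler URL, and the "data-received" event name are all placeholders):

// Hide the full-window "please wait" overlay once the custom event fires.
$(document).on('data-received', function () {
    $('#loading-overlay').hide();
});

$(function () {
    $('#loading-overlay').show(); // a fixed, full-window blocking element
    $.getJSON('question-data.ashx' + window.location.search, function (data) {
        // ...inject the returned data into the page here...
        $(document).trigger('data-received');
    });
});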
EDIT:
I think that probably what you need is to convert your website into a webapp using a manifest to list "cacheable" static content. Then query your server only for dynamic (AJAX) data:
http://diveintohtml5.info/offline.html
(IE 10+ also supports web app manifests)
Modern browsers will read the manifest to know whether they need to reload static content or not. Using a web app manifest will also allow you to integrate your web site with the OS. For example, on Android it will be listed in the recent-tasks list (otherwise only your browser, not your app, is shown) and the user can add a shortcut to the desktop.
So, you have static HTML and use server-side code only in handlers? Why can't you have one ASP.NET page (generated on the server side) to load the initial data, with all other data processed using AJAX requests?
If it's possible to use any backend logic to determine what to load on the server side, it will be easy to get the data.
Say, for example, you want to load JSON a in the page cc.php by calling the page cc.php?json=a. You can decide from the PHP code to put the JSON into the page itself and use it as an object in your HTML page.
If you are reading the query string on the client to determine what to load, you have to make two calls.
The primary thing you appear to want is what is known as a router.
Since you seem to want to keep things fairly bare metal, the traditional answer would be Backbone.js. If you want something even faster and leaner, then the optimised Backbone fork ExoSkeleton might be just the ticket, but it doesn't have the following that Backbone proper has. Certainly better than cooking up your own thing.
There are some fine frameworks around, like Ember and Angular which have large user bases. I've been using Ember recently for a fairly complex application as it has a very sophisticated router, but based on my experiences I'm more aligned with the architecture available today in React/Flux (not just React but the architectural pattern of Flux).
React/Flux with one of the add-on router components will take you very far (Facebook/Instagram), and in my view it offers a superior architecture for web applications than traditional MVC; it is currently the fastest framework for updating the DOM and also allows isomorphic applications (running on both client and server). This represents the so-called "holy grail" of web apps, as it sends the initial rendered page from the server and avoids any delays due to framework loading; subsequent interactions then use AJAX.
Above all, checkout some of the frameworks and find what works best for you. You may find some value comparing framework implementations over at TodoMVC but in my view the Todo app is far too simple and contrived to really show how the different frameworks shine.
My own evolution has been jQuery -> Backbone -> Backbone + Marionette -> Ember -> React/Flux so don't expect to get a good handle on what matters most to you until you have used a few frameworks in anger.
The main issue is from a UX / UI point of view.
Once you get your data from the server (in Ajax) after the page has been loaded - you'll get a "flickering" behavior, once the data is injected into the page.
You can solve this by presenting the page only after the data has arrived, OR use a pre-loader of some kind - to let the user know that the page is still getting its data, but then you'll have a performance issue as you already mentioned.
The ideal solution in this case is to get the "basic" data that the page needs (on the first request to the server), and manipulate it via the client, thus easing the "flickering" behavior.
It's a trade-off between performance and "flickering" / pre-loading indication.
The most popular library for this kind of SPA (Single Page Application) is AngularJS.
If I understand your inquiry correctly, you might want to look more into:
1) window.location.hash
Instead of using the "?", you can make use of the "#" to manipulate your page based on the query string (a small sketch follows the reference link).
Reference: How to change the querystring on the same page without postback
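For example, a small sketch of carrying the same id from the question in the fragment instead of the query string:

// Write state into the fragment instead of the query string (no postback):
window.location.hash = 'id=1'; // URL becomes question.html#id=1

// Read it back later (on load or inside a hashchange handler):
var fragment = window.location.hash.substring(1); // "id=1"
var id = fragment.split('=')[1];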
2) hashchange event
This event fires whenever there's a change in the fragment/hash ("#") of the URL. Also, you might want to track the hash to compare the previous hash value with the current hash value.
e.g.
var prevHash = location.hash; // For tracking the previous hash.

$(window).on('hashchange', function () {
    // your manipulation for the query string goes here...
    prevHash = location.hash;
});
Reference: On - window.location.hash - Change?
3) For a client-side entry point, similar to the server-side Page_Load, you may make use of this,
e.g.
/* Appends a method - to be called after the page (from server) has been loaded. */
function addLoadEvent(func) {
    var oldonload = window.onload;
    if (typeof window.onload != 'function') {
        window.onload = func;
    } else {
        window.onload = function () {
            if (oldonload) {
                oldonload();
            }
            func();
        };
    }
}

function YourPage_PageLoad() {
    // your code goes here...
}

// Client entry-point
addLoadEvent(YourPage_PageLoad);
Since you're doing pure AJAX, the benefit of this technique is that you would be able to easily handle the browser's previous/next (back/forward) buttons and present the proper data/page to the user.
I would prefer AngularJS. It is a good technology and you can do pagination with one HTML page. So I think this will be a good framework for you, as you're using static content.
AngularJS follows the MVC concept; please read the AngularJS tutorial. This framework will be worth it for your new project. Happy coding!
I have an AngularJS controller that has something like the following when initialized:
$scope.urlChanged = false;
and the URL is like /firstpage/test.
There is a button on the page, and when the user clicks the button, the following is executed:
$scope.urlChanged = true;
window.location = '/secondpage/test';
The page goes to /secondpage/test as expected. When clicking the browser back button, the page goes back to /firstpage/test. But the $scope.urlChanged is false, not the final value true. Is this expected in Angular? How do I make $scope.urlChanged = true when going back?
Scope variables are not saved when you navigate. In fact, not even services will retain their values/state. When you navigate with the browser's back button you actually load a whole new Angular app.
This is not Angular's fault; that is how the browser is expected to handle a new request. The way to persist data in this case is to save it in something that persists between requests.
As I see it, you have three(ish) options:
Save state in cookies: well supported by almost all browsers, but take care to handle them as client-side cookies or you won't be able to save data on a page you did not submit (exactly your problem with navigating back in the browser).
Save server-side: this has the same problems as the server-side cookies. You need to post data to the server for it to persist - which you could do with AJAX calls and 'auto-save' with a timeout function. You also need a way of tracking who you are so you can get the correct data from the server - which is often done with cookies, but can be done with query string parameters and also with (basic) authentication.
LocalStorage: this is my favorite option and is pretty well supported, if you don't need to support legacy IE browsers. There are good frameworks designed for Angular that make it easy to use - and some even have a fallback to cookies if not supported. A minimal sketch follows the link below.
Check out this link for LocalStorage support:
https://github.com/grevory/angular-local-storage
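For the question's urlChanged flag, a rough localStorage sketch (plain localStorage here, not any particular Angular wrapper; the goToSecondPage handler name is made up):

// In the controller for /firstpage/test: restore the flag when the controller initializes...
$scope.urlChanged = window.localStorage.getItem('urlChanged') === 'true';

// ...and persist it just before navigating away.
$scope.goToSecondPage = function () {
    $scope.urlChanged = true;
    window.localStorage.setItem('urlChanged', 'true');
    window.location = '/secondpage/test';
};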
On a change of view, a new controller comes into the picture and the previous view's controller instance is destroyed. Also, every controller has its own private scope, which gets destroyed once the view is changed, to avoid confusion.
I have a simple search application in which I'm trying to use AJAX in order to navigate to different responses.
My problem is pretty straight forward and I'm sure that there is a simple solution to it but I am simply not able to find it.
I'm sending a new request this way:
xhrObj.open('post', 'search.do', false); // third argument false = synchronous request
xhrObj.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
xhrObj.send('id=1');
I would like the browser, after this request and the necessary business logic, to change the view to the new page specified in the back end.
Instead, I'm receiving the page's HTML code in the response, but the navigation to this page never takes place.
I was under the impression that if the async parameter is set to false the flow would behave the way I was expecting.
How can I make my JS script navigate to a new page after an AJAX call? Please bear in mind that I cannot use location.href to jump to the new page, because this new page is a .JSP built on the server.
Maybe you should have search.do return a URL, and then do this:
xhrObj.onreadystatechange = function () {
    if (xhrObj.readyState == 4 && xhrObj.status == 200) {
        // Navigate to the URL that search.do returned in the response body.
        window.location = xhrObj.responseText;
    }
};
hope this helps.
If you want to have something similar to what GitHub has, you need the HTML5 History API. Without the HTML5 History API you can't manipulate the URL (with the help of JS) without causing a redirect.
Alternatively, you could use a hash in the URL (domain/#hash/blabla/bla), but that's not great either.
I'm making a website that tends to handle all requests in one page (AJAX).
So I thought that I could trap every click on a link and check: IF it's on my website, I do something in JavaScript (an AJAX request, for example); ELSE, it opens the link as usual.
Doing a watch on window.location did not work!
And moreover, I don't know if there is any way to get the part of the URL that is after the # sign.
Note: both Gmail and Facebook do that, I guess! They use something like this:
http://mail.google.com/mail/#inbox
http://www.facebook.com/home.php#/inbox/?ref=mb
Kindly consider that I love to use jQuery in my projects, so any solution using it is preferred.
Any ideas?
Here is another good read: Restoring Conventional Browser Navigation to AJAX Applications
Excerpt from the article:
Many developers have adopted AJAX as a way to develop rich web applications that are almost as interactive and responsive as desktop applications. AJAX works by dividing the web UI into different segments. A user can perform an operation on one segment and then start working on other segments without waiting for the first operation to finish.
But AJAX has a major disadvantage; it breaks standard browser behavior, such as Back, Forward, and bookmarking support. Rather than forcing users to adapt to AJAX's shortcomings, developers should make their AJAX applications comply with the traditional web interaction style...
The fragment part of the URL is used to enable navigation history (back and forward buttons) on AJAX-enabled websites. If you want to "trap" clicks on links, since you're using jQuery anyway, you could do just that:
$('a').click(function () {
    var item = $(this);
    var url = item.attr('href');
    // your logic here
});
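Building on that, a rough sketch of the IF/ELSE check from the question, including reading the part after the # sign (the loadContent helper is hypothetical):

$('a').click(function (event) {
    var url = $(this).attr('href');

    // Links pointing at this site: handle them in JavaScript instead of navigating.
    if (this.hostname === window.location.hostname) {
        event.preventDefault();
        window.location.hash = url; // keeps back/forward and bookmarking working
        // loadContent(url);        // e.g. fetch the content with an AJAX request
    }
    // External links fall through and open as usual.
});

// The part of the URL after the "#" sign:
var fragment = window.location.hash.substring(1);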
If you use fragments (window.location.hash) in combination with AJAX, note that IE6 submits the fragment part of the URL in AJAX requests, which can lead to very hard-to-debug bugs, so be aware of that.
There's a hashchange event dispatched on the window for most recent browsers.
Simple:
if (window.location.hash) {
    // Fragment exists
} else {
    // Fragment doesn't exist
}
Link:
How can you check for a #hash in a URL using JavaScript?
See @Pekka's link to "How can you check for a #hash in a URL using JavaScript?" to look at the hash. Just put that function in the callback to window.setInterval().
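In other words, something like this rough polling fallback for browsers that lack the hashchange event (the interval length is arbitrary):

// Poll the fragment on a timer and react whenever it changes.
var lastHash = window.location.hash;
window.setInterval(function () {
    if (window.location.hash !== lastHash) {
        lastHash = window.location.hash;
        // handle the new fragment here, e.g. load the matching AJAX content
    }
}, 100);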