Backbone routing/history issue with Jekyll static pages

I'm building a simple marketing website with Jekyll, and using Backbone's routing and history behind the scenes to handle navigation. Each page of my site is its own HTML file, and my strategy is to preventDefault() on links between pages, fire off a jQuery.get() to grab the new HTML, and replace my div.content with the information from the new page.
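Roughly, the link handling looks like this (a simplified sketch; the data-internal attribute and the div.content selector stand in for my real markup):
// Intercept internal links, fetch the target page, and swap in its content
$(document).on('click', 'a[data-internal]', function (e) {
  e.preventDefault();
  var href = $(this).attr('href');

  $.get(href, function (html) {
    // Pull the new page's content block out of the fetched HTML
    $('div.content').html($(html).find('div.content').html());

    // Update the address bar without triggering a route
    Backbone.history.navigate(href, { trigger: false });
  });
});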
I know this setup is a little out of the ordinary, but I have my reasons: a single-page structure is preferable because I want precise control over page transitions, and I want to avoid requesting my webfonts each time the user navigates to a new page. Keeping the HTML files static and separate is also a win for search engines.
Here's the issue: everything works fine when I start from my root URL, but when I begin at a different page, e.g. mydomain.com/page1, the history breaks. During initialization, my Router attempts to route me to the page I'm already on, resulting in a 404: Could not GET mydomain.com/page1/page1. I can prevent this with a hacky isFirstLoad boolean, but that obviously sucks, and it still breaks when I start clicking around and use the back button to return to /page1.
I recognize that one solution is to write some server-side logic that serves my index.html regardless of what URL is hit. I'm not sure how to do that, however, particularly for a local environment. Would I do it with PHP or .htaccess? Is this even what I have to do? Am I going about this totally the wrong way?
Thanks!

Yes, one solution would be to serve every request with index.html. But that has a big downside: your site is no longer accessible by search engines. To keep the SEO benefit of having a static site, I'd suggest you shy away from that option.
I think the optimal solution is already provided by Backbone. From their documentation:
If the server has already rendered the entire page, and you don't want the initial route to trigger when starting History, pass silent: true.
So, first make sure that your Router is configured properly and all the routes match up with your static pages, and instantiate the router.
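For example, a minimal router for this kind of setup might look something like the following (just a sketch; the route names and the loadPage helper are placeholders, not your actual pages):
var SiteRouter = Backbone.Router.extend({
  routes: {
    '': 'home',
    'page1': 'page1',
    'page2': 'page2'
  },

  home: function () { this.loadPage('/'); },
  page1: function () { this.loadPage('/page1'); },
  page2: function () { this.loadPage('/page2'); },

  // Fetch the static HTML and swap the content in,
  // mirroring the jQuery.get() approach from the question
  loadPage: function (url) {
    $.get(url, function (html) {
      $('div.content').html($(html).find('div.content').html());
    });
  }
});

var router = new SiteRouter();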
Then, start the History like this:
Backbone.history.start({ pushState: true, silent: true, root: '/' });
pushState keeps the URLs friendly. The silent flag tells Backbone that your static server already served the page, and History is just starting up after the fact (which is what you want). And root ensures Backbone knows what the true root of your site is (so you don't get the page1/page1 nonsense).
In my experience, getting routing set up properly can be a little fickle...so best of luck!

Related

Force a route cleanly with javascript

I'm working with a legacy app's UI, and the path that links to this app is a default:
something/fldr
Whenever that path loads, it forces a fldr/landing.asp page. We want it to go to other.asp instead of landing.asp.
My approach for this is to use:
if (document.readyState === "interactive") {
  if (location.href == 'https://www.something.com/fldr') {
    location.href = "https://www.something.com/other.asp";
  }
}
Doing this causes a page stutter: landing.asp loads, shows for about two seconds, and then refreshes to the correct page.
Is there a standard method for doing something like this in JS or jQuery? I feel like there is a way to make the page hold off until the if statement executes rather than trying to load the wrong page, but I can't for the life of me remember what it is. I've handled this on the back end by forcing the correct page to be returned by the API, but I still feel like this is something that can be resolved with only JS.
Note: The route names are made up since this is a stripped down problem of a legacy app.
JavaScript (when running in a browser) is a client-side technology.
That means it cannot run until the page has been served and sent to the user's browser (client) and has at least partially loaded. The browser begins loading resources and parsing code, and your script executes in the order it is parsed. This is, in fact, the delay you're experiencing.
While you may be able to tweak this so that the location.href change executes somewhat earlier in the process, there is no way to avoid a partial page load prior to the client-side redirect you have implemented.
Essentially, there is a better way to do this, one which will reduce the redirect delay to be imperceptible to a user.
Making this change at the web-server level is the ideal solution; however, first consider whether it is even needed.
Before implementing a redirect, I would suggest looking in the IIS settings to see if there is a default document set to fldr/landing.asp.
If so, you can simply change that setting to make the default document whatever you need.
Here's an example of how to do this in IIS.
If there is no default document, or if there is some other code or application logic forcing landing.asp to load, then you would set up a 301 Permanent Redirect for that URL on the web server.
Here are IIS docs on setting this up.
If for some reason the above options are unavailable to you (you don't have access to the web server, etc.), then the best you can do is ensure that the redirect script is the very first thing in the page, before any other scripts, stylesheets, etc., are loaded.
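For example, an inline script at the very top of <head>, before any stylesheets, runs before anything is painted; using location.replace also keeps landing.asp out of the history (the URLs here are the made-up ones from the question):
// Inline <script> placed at the very top of <head>
(function () {
  if (location.href === 'https://www.something.com/fldr') {
    // replace() redirects without leaving landing.asp in the browser history
    location.replace('https://www.something.com/other.asp');
  }
})();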
Another hacky thing that might work is just replacing the entire content of landing.asp with that of other.asp and calling it a day :)
That is a last resort, of course; hopefully you can just change the default document and that will handle it.

Why is Angular Universal necessary?

So the obvious answer is that it's necessary because it serves routed paths from the server, so that we don't get 404s.
However, solutions like angular-cli-ghpages solve this by adding a script to the app that parses parameters returned in a 404 and then reroutes the app to the correct state.
So, just curious: are there any drawbacks to this, and why wouldn't this be used in general instead of solutions like Angular Universal or Rendertron?
For example this is what spa-github-pages says:
A quick SEO note - while it's never good to have a 404 response, it appears based on Search Engine Land's testing that Google's crawler will treat the JavaScript window.location redirect in the 404.html file the same as a 301 redirect for its indexing. From my testing I can confirm that Google will index all pages without issue, the only caveat is that the redirect query is what Google indexes as the url. For example, the url example.tld/about will get indexed as example.tld/?p=/about. When the user clicks on the search result, the url will change back to example.tld/about once the site loads.
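For reference, the redirect trick described above works roughly like this (a simplified sketch of the spa-github-pages approach, not the library's exact code): the 404 page stuffs the requested path into a query parameter and bounces to the index page, which then restores the real URL before the app boots.
// In 404.html: remember the requested path and bounce to the index page
var path = window.location.pathname + window.location.search + window.location.hash;
window.location.replace(window.location.origin + '/?p=' + encodeURIComponent(path));

// In index.html, before the app bootstraps: restore the original URL
// so the client-side router can take over
var params = new URLSearchParams(window.location.search);
if (params.has('p')) {
  window.history.replaceState(null, '', decodeURIComponent(params.get('p')));
}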
Because of two main things:
First page load speed;
SEO.
Robots do not run JavaScript, so they parse what they get from the server, and that is where Universal comes in.
Even with an --aot-built app served by gh-pages, with a 404 page that is a clone of the index, the client/robot still needs to fetch the initial files, parse them, and finally mount the final view. gh-pages does not serve the final HTML state.

AngularJS - SEO - S3 Static Pages

My application uses AngularJS for the frontend and .NET for the backend.
In my application I have a list view. On clicking a list item, it fetches a pre-rendered HTML page from S3.
I am using Angular ui-router states.
app.js
...
state('staticpage', {
  url: "/staticpage",
  templateUrl: function () {
    return 'http://xxxxxxx.cloudfront.net/staticpage/staticpage1.html';
  },
  controller: 'StaticPageCtrl',
  title: 'Static Page'
})
StaticPage1.html
<div>
  Hello static world 1!
</div>
How do I do SEO here?
Do I really need to make HTML snapshots using PhantomJS or the like?
Yes, PhantomJS would do the trick, or you can use prerender.io; with that service you can just use their open-source renderer and run your own server.
Another way is to use the _escaped_fragment_ meta tag.
I hope this helps; if you have any questions, add comments and I will update my answer.
Did you know that Google renders HTML pages and executes the JavaScript code in the page, and does not need any pre-rendering anymore?
https://webmasters.googleblog.com/2014/05/understanding-web-pages-better.html
And take a look at these :
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
http://wijmo.com/blog/how-to-improve-seo-in-angularjs-applications/
My project's front-end is also built on top of Angular, and I decided to solve the SEO issue like this:
I've created an endpoint for all search engines (SE) where all the requests with the _escaped_fragment_ parameter go;
I parse the HTTP request for the _escaped_fragment_ GET parameter;
I make a cURL request with the parsed category and article parameters and get the article content;
Then I render the simplest (and SEO-friendly) template for the SE with the article content, or throw a 404 Not Found exception if the article does not exist;
In total: I do not need to prerender any HTML pages or use prerender.io, I have a nice user interface for my users, and search engines index my pages very well.
P.S. Do not forget to generate sitemap.xml and include in it all the URLs (with _escaped_fragment_) which you want to be indexed.
P.P.S. Unfortunately my project's back-end is built on top of PHP, so I cannot show you a directly suitable example. But if you want more explanation, do not hesitate to ask.
Firstly, you cannot assume anything.
Google does say that their bots can understand JavaScript applications very well, but that is not true for all scenarios.
Start by using the Fetch as Google feature in Webmaster Tools on your link and see if the page is rendered properly. If yes, then you need not read further.
If you see just your skeleton HTML, it is because the Google bot assumes the page load is complete before it actually completes. To fix this you need an environment where you can recognize that a request is from a bot and return a prerendered page to it.
To create such an environment, you need to make some changes to your code.
Follow the instructions in Setting up SEO with Angularjs and Phantomjs,
or alternatively just write code in any server-side language, like PHP, to generate prerendered HTML pages of your application
(PhantomJS is not mandatory).
Then create a redirect rule in your server config that detects the bot and redirects it to the prerendered plain HTML files. (The only thing you need to make sure of is that the content of the page you return matches the actual page content; otherwise bots might not consider it authentic.)
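As an illustration only (this is not from the original answer), such a rule in an Express server might look roughly like this; the user-agent list and the snapshots directory are assumptions:
var express = require('express');
var path = require('path');

var app = express();

// Very rough bot detection by user agent (illustrative, not exhaustive)
var BOT_UA = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;

app.use(function (req, res, next) {
  if (BOT_UA.test(req.get('User-Agent') || '')) {
    // Serve the prerendered snapshot that matches the requested route,
    // e.g. /staticpage -> snapshots/staticpage.html
    var file = (req.path === '/' ? 'index' : req.path.replace(/^\//, '')) + '.html';
    return res.sendFile(path.join(__dirname, 'snapshots', file));
  }
  next(); // normal users get the regular Angular app
});

app.use(express.static(path.join(__dirname, 'dist')));
app.listen(3000);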
Note that you also need to consider how you will add entries to sitemap.xml dynamically when you add pages to your application in the future.
If you are not looking for that kind of overhead and you are short on time, you can simply use a managed service like Prerender.
Eventually bots will mature, they will understand your application, and you will be able to say goodbye to your SEO proxy infrastructure. This is just for the time being.
At this point in time, the question really becomes somewhat subjective, at least with Google -- it really depends on your specific site, like how quickly your pages render, how much content renders after the DOM loads, etc. Certainly (as #birju-shaw mentions) if Google can't read your page at all, you know you need to do something else.
Google has officially deprecated the _escaped_fragment_ approach as of October 14, 2015, but that doesn't mean you might not want to still pre-render.
YMMV on trusting Google (and other crawlers) for reasons stated here, so the only definitive way to find out which is best in your scenario would be to test it out. There could be other reasons you may want to pre-render, but since you mentioned SEO specifically, I'll leave it at that.
If you have a server-side templating system (php, python, etc.) you can implement a solution like prerender.io
If you only have AngularJS-only files hosted on a static server (e.g. Amazon S3) => Have a look at the answer in the following post: AngularJS SEO for static webpages (S3 CDN)
Yes, you need to prerender the page for the bots. prerender.io can be used, and your page must have the meta tag:
<meta name="fragment" content="!">

replicate google maps URL behaviour with javascript? url+"/#foo"

Google Maps does this thing where if I browse to, say, Australia, the URL changes to
https://www.google.com/maps/#-28.0345854,135.1500838,4z
I'm interested in doing something like this on my web application. So far I have this:
var baseurl = window.location.href.split("/#")[0];
window.history.replaceState({}, 'foo', baseurl + '/#foo');
which works just fine for adding "/#foo" to the URL.
My problem is that, after adding /#foo, the URL doesn't work; it 404s.
I'm not interested in modifying the browser's history, which is why I use replaceState instead of pushState.
Anyway, is there a way to do this with JS, or do I need server-side code to serve the appropriate page?
Thank you!
You "need server-side code to serve the appropriate page". an # character is still part of the URL and therefore needs to be handled by the server. If you want to handle the this kind of situation client side only then what you want is to use # instead. anything after a hash is handle client side and does not trigger a new page to load from the server.
Several libraries use this to replicate routing in a single page HTML only app. For example:
Backbone.js Router
jQuery-Router
jquerymobile-router
Ember.Router
And many more.

History.js - sharing the link of an AJAX-loaded page

I have the following function that activates when I click on some links:
function showPage(page) {
  var History = window.History;
  History.pushState(null, null, page);
  $("#post-content").load(page + ".php");
}
The content of the page updates and the URL changes. However, I know I'm surely doing something wrong: for example, when I refresh the page, it gives me a Page Not Found error, and the link to the new page can't be shared, for the same reason.
Is there any way to resolve this?
It sounds like you're not routing your dynamic URLs to your main app. Unless page refers to a physical file on your server, you need to be doing some URL rewriting server-side if you want those URLs to work for anything other than simply being placeholders in your browser history. If you don't want to mess with the server side, you'll need to use another strategy, like hacking the URL with hashes. That way the server is still always serving your main app page, and then the app page reads the URL add-on stuff to decide what needs to be rendered dynamically.
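For instance, with a Node/Express server the rewrite can be as small as the sketch below (directory and file names are assumptions); Apache and nginx have equivalent rewrite rules:
var express = require('express');
var path = require('path');

var app = express();

// Serve static assets (JS, CSS, images) normally
app.use(express.static(path.join(__dirname, 'public')));

// Every other URL falls through to the main app page,
// so a pushState URL like /some-page no longer 404s on refresh
app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);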
You need to stop depending on JavaScript to build the pages.
The server has to be able to construct them itself.
You can then progressively enhance with JavaScript (pushState + Ajax) to transform the previous page into the destination page without reloading all the shared content.
Your problem is that you've done the "enhance" bit before building the foundations.
