I have developed an AngularJS application together with a Parse.com backend (only data, no business logic). They communicate over REST.
My problem is that I would like to get my page indexed by Google. To achieve that, I somehow have to serve all my content as static pages so that Google can index it.
I found a nice service called getseojs.com, which does nothing more than serve all the contents of my website as static pages.
All I had to adjust on my side was to add a RewriteCond and RewriteRule to my .htaccess file, which simply forwards every request containing "_escaped_fragment_=" to the getSEOjs service.
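For illustration, such a rule might look roughly like this (a sketch only; the token is a placeholder, and the exact target format and flags follow getSEOjs's own setup instructions):

    # Sketch: if the crawler's _escaped_fragment_ parameter is present,
    # forward the request to the getSEOjs snapshot service.
    # YOUR_TOKEN and the domain are placeholders.
    RewriteEngine On
    RewriteCond %{QUERY_STRING} _escaped_fragment_=
    RewriteRule ^(.*)$ http://getseojs.com/v2/YOUR_TOKEN/http://www.mydomain.com/$1 [P,QSA,L]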
My only problem is that my links aren't working in the static version.
The reason is quite simple:
The URL of my AngularJS application is something like www.mydomain.com/app/
Now my links are plain hashbang fragments like #!/sample/content ("Sample Content"), which work fine in normal browsers.
The problem is that in the static version the domain is different. It is something like:
http://getseojs.com/v2/sdfxsaa2/://www.mydomain.com:80/app/?_escaped_fragment_=/sample/content
for the same Sample Content site. When I click on a link on the static site, I get redirected to something like:
http://getseojs.com/v2/sdfxsaa2/://www.mydomain.com:80/app/?_escaped_fragment_=/sample/content#!/othercontent
instead of
http://getseojs.com/v2/sdfxsaa2/://www.mydomain.com:80/app/?_escaped_fragment_=/othercontent
Is there any way I can avoid this? Is there no other way than working with absolute URLs? But even then I have a problem, because I need the /app/ part (since that is where my website lives), and between the app part and the routes I need the hashbang (#!), or in the case of Googlebot the "?_escaped_fragment_=" part.
I hope one of you can help me; I have no idea how to solve this issue.
Thanks a lot.
Greets
Marc
I'm trying to build a small homepage hosted on GitHub Pages, with (1) a title, (2) a navbar and (3) a content window. I'm updating the content with AJAX and use pushState/popstate for URL updating and browser history. The problem is that if one refreshes the page at e.g. user.github.io/content1, the page is not found (because the HTML file doesn't actually exist).
I read that if one controls the server, this is usually solved by redirecting (or mod_rewrite-ing) every requested deep link to one resource and from there reconstructing the page with javascript according to the requested link. On gh-pages, this is not possible, so I thought about actually creating all the html files reflecting the url paths, but with each of them only containing the javascript code to re-generate the corresponding state (so that e.g. if I want to update my title or the links in the navbar, I don't have to manually edit all of the html files).
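To make that concrete, every generated HTML file could include just one shared script, something like the sketch below (all names here are hypothetical; the point is that the title, navbar and content live in a single place):

    // site.js (sketch): included by every stub HTML page, so title, navbar
    // and content are only ever defined here.
    var NAV = [
      { path: '/content1', label: 'Content 1' },
      { path: '/content2', label: 'Content 2' }
    ];

    function renderPage(path) {
      document.title = 'My homepage - ' + path;
      var links = NAV.map(function (item) {
        return '<a href="' + item.path + '">' + item.label + '</a>';
      }).join(' | ');
      document.body.innerHTML =
        '<h1>My homepage</h1><nav>' + links + '</nav>' +
        '<div id="content">Loading ' + path + '...</div>';
      // an AJAX call could fill #content for the given path here
    }

    // each stub page just renders whatever its own path is
    renderPage(window.location.pathname);

    // keep back/forward working together with pushState
    window.addEventListener('popstate', function () {
      renderPage(window.location.pathname);
    });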
I have read about Jekyll, but I'm a beginner and I'd like to program everything from scratch to learn something.
Do you think this approach is a waste of time? Are there better ways to do this?
Thanks a lot, Stefan
Two cases:
Making a Single Page Application (SPA)
Your SPA is a JavaScript application that needs to send data to/from a server that stores it, and to render the results on the client side.
In this case, your problem is not a Jekyll problem, it's a data one.
You can then have a serious look at React, Angular and so on.
Making static web pages
GitHub Pages uses Jekyll to generate static pages.
This way you can generate static pages with a title, a specific navbar and content, and nothing more.
In terms of development and performance, it will be far more efficient.
Why is it better?
Still assuming that you're not building an SPA.
Any time you make a change in gh-pages (anything new like a page or a post), your site is rebuilt (posts, pages, includes like the navigation, and so on).
New page -> commit to gh-pages -> new build -> everything is OK!
I am a complete Google Analytics beginner and would appreciate help with a basic question.
I am developing HTML, CSS and JavaScript based applications which are then uploaded into an iOS application that presents them in a fancy way. So my application is a hybrid application (half JS web site, half mobile app).
I would love to see users' activity in my app when they are browsing through it, and I thought GA might work well for that - but the problem is that the outer app doesn't provide me with any URL for my inner JS app (the inner web site's URL is file:///).
At this page (link), I found that the URL is not really important, and that it is the tracking code which matters. So I used a dummy URL, added the GA snippet to my application and uploaded it in iPresent. I can't see any live activity though... :/ It also says that tracking is not installed (not used on the home page).
So I am wondering - is the URL really important?
Any ideas?
Thanks!
URL (or page path) is only important if you want to report on data based on which URLs your visitors went to.
If your app doesn't use URLs at all, perhaps it fits better with the "app" model, where you send screen name data instead of page data. You can read more about the differences between web and app views here:
https://support.google.com/analytics/answer/2649553
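For example, with the usual analytics.js snippet already loaded, app-style hits can be sent with screenview instead of pageview, roughly like this (the tracking ID, app name and screen name are placeholders):

    // assumes the standard analytics.js snippet has defined ga()
    ga('create', 'UA-XXXXXX-Y', 'auto');              // placeholder tracking ID
    ga('set', 'appName', 'MyHybridApp');              // appName is required for app hits
    ga('send', 'screenview', { screenName: 'Home' }); // instead of a pageview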
I found out that the URL is not needed. This type of problem can be solved by using the GA Measurement Protocol:
https://developers.google.com/analytics/devguides/collection/protocol/v1/
Validate your hit here:
https://ga-dev-tools.appspot.com/hit-builder/
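As an illustration, a Measurement Protocol hit is just an HTTP request to the collect endpoint, so it needs no page URL at all (a sketch; the tid and cid values are placeholders):

    // sketch: send a pageview hit straight to the Measurement Protocol
    var params = [
      'v=1',                               // protocol version
      'tid=UA-XXXXXX-Y',                   // placeholder tracking ID
      'cid=555',                           // anonymous client ID
      't=pageview',                        // hit type
      'dp=' + encodeURIComponent('/home')  // document path, can be any value
    ].join('&');

    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://www.google-analytics.com/collect');
    xhr.send(params);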
Essentially, I want to create a personal website that functions like this one:
https://sublime.wbond.net/packages/Jade
It's contained within one HTML page, and clicking on a nav item only loads the required information.
Looking at the JavaScript code, I believe the developer is using Backbone.js and Handlebars.js. I think they used PHP for the backend.
There is a key functionality that I'm after that is within this site. Essentially, when you are at the aforementioned directory, and then you change to https://sublime.wbond.net/docs, there will be an AJAX request for only the HTML that's needed and then it is appended to the current page.
Having written a simple Backbone app by following a tutorial, it seems to be done differently there. Hosting the app with Node, it loads all of the content. When you go to another route, it still loads all the content and then Backbone appends the right piece based on the URL. I can see this being useful for certain kinds of apps, but I don't want that functionality. I looked into it more and thought about using Backbone's fetch() functionality, but I'm not too sure he's using that either.
It appears he's doing something like Rendr by Airbnb. I can't really use that because the documentation is not sufficient right now.
It looks like when you request a page it just gives you the HTML ready-made, without the need to compile it locally. Is there something I'm missing here in terms of utilizing Backbone, or is this just some tool he's made to handle this?
If you are not afraid of spending hours in front of videos, these excellent screencasts could get you started: the guy explains how to build a single-page app using Backbone and Marionette, from scratch.
This web site is not using Backbone, and the solution he uses is a mix of full HTML page loads and JSON calls. Look at these links:
https://sublime.wbond.net/browse.json
https://sublime.wbond.net/search.json
https://sublime.wbond.net/docs.html
https://sublime.wbond.net/news.html
https://sublime.wbond.net/stats.json
The simplest way to get the same behavior as wbond.net would be to change the way you render the page on the backend. You need to check whether the request is an XHR and, if so, render only the content, without the layout. On the frontend you need to bind a click event to each link which sends an AJAX request to the bound URL and puts the whole response into the page's content area (jQuery's $.get() method).
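A rough sketch of both sides, assuming Node/Express on the backend (which the question mentions) and jQuery on the frontend; the helper names and the '#content' container are made up:

    // frontend sketch: intercept nav clicks, fetch only the fragment,
    // swap it into the page and update the URL
    $(document).on('click', 'nav a', function (e) {
      e.preventDefault();
      var url = this.href;
      $.get(url, function (fragment) {      // jQuery sends X-Requested-With: XMLHttpRequest
        $('#content').html(fragment);       // assumed content container
        history.pushState(null, '', url);
      });
    });

    // backend sketch (Express): fragment only for XHR, full layout otherwise
    var express = require('express');
    var app = express();

    function renderContent(page) {          // hypothetical helpers
      return '<h1>' + page + '</h1><p>...</p>';
    }
    function renderLayout(inner) {
      return '<html><body><nav>...</nav><div id="content">' + inner + '</div></body></html>';
    }

    app.get('/docs', function (req, res) {
      var fragment = renderContent('docs');
      res.send(req.xhr ? fragment : renderLayout(fragment));
    });

    app.listen(3000);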
I'm working on a web app which uses Backbone's HTML5 History option. In order to avoid having to code everything on the client and on the server, I'm using this method to route every request to index.html.
I was wondering if there is a way to get Twitter Cards to work with this setup, as currently Twitter can't read the page since everything is loaded in dynamically with JavaScript.
I was thinking about using User Agents to detect whether it's the TwitterBot, and if it is, serving a static version of the page with the required meta-tags. Would this work?
Thanks.
Yes.
At one job we did this for all the SEO/search/Facebook stuff etc.
We would sniff the user-agent, and if it was one of the following crawlers:
Facebook Open Graph
Google
Bing
Twitter
Yandex
(a few others I can't remember)
we would redirect to a special page that was written to dump all the relevant data about the page for SEO purposes into a nicely formatted (but completely unstyled) page.
This allowed us to retain our Google index position and proper Facebook sharing even though our site was a total single-page app in Backbone.
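A rough sketch of that kind of check, written as Node/Express middleware for illustration (an assumption; our actual stack and the full user-agent list were different):

    // sketch: send known crawlers to a pre-rendered, unstyled snapshot
    // instead of the JavaScript single-page app
    var express = require('express');
    var app = express();

    var BOTS = /facebookexternalhit|Googlebot|bingbot|Twitterbot|YandexBot/i;

    app.use(function (req, res, next) {
      var ua = req.headers['user-agent'] || '';
      if (BOTS.test(ua)) {
        // '/seo' is a hypothetical route serving the plain, crawlable page
        return res.redirect('/seo' + req.path);
      }
      next();                               // normal visitors get the SPA
    });

    app.listen(3000);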
Yes, serving a specific page for Twitterbot with the right meta data markup will work.
You can test your results while developing using the Cards preview tool:
https://dev.twitter.com/docs/cards/preview (with your static URL or just the tags).
I don't know if that question is worded quite right, but here is my situation:
We have some older Flash and Flex files whose original source files someone before me lost. Now they want to add event tracking when some links inside the SWFs are clicked, links that use the old navigateToURL style ActionScript. Does anyone know if you can intercept that action with JavaScript so I can add the tracking they want before it redirects the page?
Thank you, I am doubtful of it but my knowledge of ActionScript/Flash is very rusty so I thought I would ask.
Don't know if you can intercept them - but you can rewrite the strings in the compiled SWF.
Try a tool like this one:
http://buraks.com/uae/
It allows you to rewrite the strings that are used as part of the navigateToURL action.
If those navigate URLs are not cross-domain, you could do this:
Host your Flex app in an iframe
In your hosting page, poll that iframe to see if its URL changes
Of course, I'm assuming the navigation URLs are not in your domain, or tracking hits to those URLs would already be an easy problem to solve.
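A minimal sketch of that polling idea, assuming the iframe stays same-origin so its location is readable (the iframe id and the tracking hook are hypothetical):

    // sketch: watch the iframe hosting the Flex app and fire tracking
    // whenever its URL changes
    var frame = document.getElementById('flexFrame');  // assumed iframe id
    var lastUrl = null;

    setInterval(function () {
      var current;
      try {
        current = frame.contentWindow.location.href;
      } catch (e) {
        return;                  // frame navigated cross-origin; URL not readable
      }
      if (current !== lastUrl) {
        if (lastUrl !== null) {
          trackNavigation(current);          // hypothetical: send the GA event here
        }
        lastUrl = current;
      }
    }, 500);

    function trackNavigation(url) {
      console.log('navigated to', url);      // stand-in for the real tracking call
    }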