I'm building a very simple SPA 'wannabe' site (for a friend, and for exercise). The idea is simple: three static pages, home, portfolio, and contact. I've made the links from the portfolio and contact change the URL, and I just need to write some functions to change the content. So far so good, but when I refresh the page while I'm on the /contact or /portfolio "page", I get the error "Cannot GET /portfolio". The same happens when I try to copy and paste the link into another browser. The whole point of the site is to be able to send links and open them. Could this be achieved without server-side code?
No.
You must have a server side, because this mode requires URL rewriting: you have to rewrite all requests to the index of your app.
Alternatively, you can disable HTML5 location mode (and use hash-based URLs instead).
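For example, a minimal sketch of the rewrite using Node/Express (Express is just an assumption here; any server or static host that can send index.html for unknown paths will do):

    // server.js - rewrite every request to the SPA's index.html
    var express = require('express');
    var path = require('path');
    var app = express();

    // serve the static assets (js, css, images) as usual
    app.use(express.static(path.join(__dirname, 'public')));

    // any other path (/portfolio, /contact, ...) falls back to index.html,
    // so a refresh or a pasted link no longer returns "Cannot GET /portfolio"
    app.get('*', function (req, res) {
      res.sendFile(path.join(__dirname, 'public', 'index.html'));
    });

    app.listen(3000);

The hash fallback mentioned above works because everything after # (e.g. /#/portfolio) never reaches the server, so a refresh or a pasted link resolves without any rewriting.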
I have the following requirement for my application (Android, iOS):
When the application launches it displays a login.html page (which is part of the application). After logging in, the application's WebView should be occupied by home.jsp from an external domain. When the user clicks the logout button in home.jsp, it has to navigate back to the login page. On logout, when we use window.location.href="login.html", it tries to find the page on the xxx domain.
Is there a way to detect this navigation URL and override it from JavaScript or PhoneGap properties in the application?
When I inspect window.location.href in an Android emulator, I get file:///android_asset/www/index.html
But I think Nathan's idea of moving it to the server is a good one. You could also have one on the device if you really need to. (Perhaps you should ask the person specifying the app architecture how they would do it :) )
The answer is going to vary depending on how you've implemented the WebView that home.jsp is displayed in. You did not provide any code or any specific information, so the answer is going to be correspondingly vague...
If you've opened a new WebView, then you can't control it from JavaScript. You'll need to control it via Java or Objective-C code (you did not mention which environment you're developing for...).
For example, if you look in the your-app\android\native\src\com\your-app\your-app.java file, you'll see how the native layer loads the application's index.html file after the Worklight JavaScript framework has been loaded.
Similarly, you could re-use this approach in your own application to close and re-load login.html.
If you're in fact doing the redirect mentioned in the comments, meaning you're re-using the current WebView but replacing its content with external content, then I think it's expected that you've lost the application's context, and when it looks for login.html it doesn't find it... because you've moved from the app context to the web context, and the two do not know each other.
I think you should not do this redirect. Instead, you should open a new WebView using a Cordova plug-in and display your external content in that new WebView.
In this overlaid WebView, you can detect any URLs that are clicked, and if the sign-out URL is detected, close the WebView.
You can see parts of this in action in the Integrating server-generated pages in hybrid applications tutorial and accompanying sample project.
In the sample project, you can see the functions provided (where you can add yours) in android\nativeResources\src\com\IncludeExternalPages\IncludeExternalPages.java.
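If you only need the rough JavaScript shape of "open a child WebView, watch its URLs, close it on sign-out", here is a sketch using Cordova's InAppBrowser plug-in. This is an assumption on my part (the tutorial above does it with Worklight's native code), and the external URL and sign-out check below are made up:

    // open home.jsp from the external domain in a child browser window
    var ref = window.open('https://external.example.com/home.jsp', '_blank', 'location=no');

    // watch every navigation inside that window
    ref.addEventListener('loadstart', function (event) {
      // hypothetical sign-out URL - replace with whatever home.jsp actually redirects to
      if (event.url.indexOf('logout') !== -1) {
        ref.close(); // close the child WebView; the app's own login.html is visible again
      }
    });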
I'm getting a blank permissions page on Facebook login using the JavaScript SDK. It was working for the last few days and I'm not sure what I changed. I don't seem to be using my app secret anywhere (I do define the API key). Are there any suggestions for troubleshooting? The Facebook debug page unfortunately just had me add some meta tags related to Open Graph, which didn't change anything. I've tried looking at the hundreds of other questions like this but can't figure it out. I tried reverting to an older version of my code, so I'm pretty sure it's something on the Facebook side, but it was working before.
The blank page is in the popup window and has something about permissions in the URL. My site doesn't require any permissions though.
This is the URL in the login window after signing in:
https://www.facebook.com/dialog/permissions.request?_path=permissions.request&app_id=number_I_removed&redirect_uri=http%3A%2F%2Fstatic.ak.facebook.com%2Fconnect%2Fxd_arbiter.php%3Fversion%3D11%23cb%3Df8917d1784b726%26origin%3Dhttp%253A%252F%252Fpostacle.com%252Ffd565a0d8d775e%26domain%3Dpostacle.com%26relation%3Dopener%26frame%3Df12565ffe5570f2&sdk=joey&display=popup&response_type=token%2Csigned_request&domain=postacle.com&fbconnect=1&from_login=1&client_id=number_I_removed
If I reload my page I'm signed in correctly. The app ID and client ID are the same in the URL. I'm not sure if they should be, but I wasn't given a client ID or any means of generating one.
OK, so of course I left out a critical detail: I'm using Django. Further, I'm so reliant on Django that I was serving my Facebook channelUrl from a view. Facebook's API didn't like that. After reading the story of a similarly cursed fellow, I changed my ways: the URL now links directly to a static channel.html file. No more sorrows.
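For anyone hitting the same wall, this is roughly what the fix looks like with the legacy Facebook JavaScript SDK of that era (the exact options are from memory and the path is my own; the key point is that channelUrl points at a plain static file, not a Django view):

    window.fbAsyncInit = function () {
      FB.init({
        appId:      'YOUR_APP_ID',                  // app ID from the Facebook app settings
        channelUrl: '//postacle.com/channel.html',  // a static file served as-is, not a view
        status:     true,
        cookie:     true,
        xfbml:      true
      });
    };
    // channel.html itself contains a single line:
    // <script src="//connect.facebook.net/en_US/all.js"></script>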
I'm working on a web app which uses Backbone's HTML5 History option. In order to avoid having to code everything on the client and on the server, I'm using this method to route every request to index.html
I was wondering if there is a way to get Twitter Cards to work with this setup, since currently Twitter can't read the page, as everything is loaded in dynamically with JavaScript.
I was thinking about using user agents to detect whether the visitor is Twitterbot, and if it is, serving a static version of the page with the required meta tags. Would this work?
Thanks.
Yes.
At one job we did this for all the SEO/search/Facebook stuff.
We would sniff the user-agent, and if it was one of the following crawlers:
Facebook Open Graph
Google
Bing
Twitter
Yandex
(a few others I can't remember)
we would redirect to a special route that dumped all the relevant data about the page for SEO purposes into a nicely formatted (but completely unstyled) page (a rough sketch of the check is shown after this answer).
This allowed us to retain our Google index position and proper Facebook sharing, even though our site was a total single-page app in Backbone.
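A minimal sketch of that user-agent check, assuming a Node/Express front end (our setup was different, but the idea is the same; the /snapshot path is made up):

    var express = require('express');
    var app = express();

    // crude list of crawler signatures we cared about
    var BOTS = /facebookexternalhit|Googlebot|bingbot|Twitterbot|YandexBot/i;

    app.use(function (req, res, next) {
      if (BOTS.test(req.headers['user-agent'] || '')) {
        // serve the plain, unstyled, fully rendered version of the same URL
        return res.redirect('/snapshot' + req.url);
      }
      next(); // real browsers get the normal single-page app
    });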
Yes, serving a specific page to Twitterbot with the right metadata markup will work.
You can test your results while developing using Twitter's card preview tool:
https://dev.twitter.com/docs/cards/preview (with your static URL or just the tags).
How can I make my pages show up like Grooveshark's pages?
http://grooveshark.com/#!/popular
Is there a tutorial or something that explains how to show pages this way with jQuery or JavaScript?
The hash and exclamation mark in a URL are called a hashbang, and are usually used in web applications where JavaScript is responsible for actually loading the page. Content after the hash is never sent to the server. So, for example, say you have the URL example.com/#!recipes/bread. In this case the page at example.com would be fetched from the server, and it could contain a piece of JavaScript. That script can then read from location.hash and load the page at /recipes/bread.
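A bare-bones sketch of that client-side part (the /recipes/bread path and the #content element are just examples):

    // runs on example.com; the server only ever saw "example.com/"
    function route() {
      // "#!recipes/bread" -> "recipes/bread"
      var page = location.hash.replace(/^#!\/?/, '') || 'home';

      // fetch the real fragment (e.g. /recipes/bread) and inject it
      fetch('/' + page)
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.querySelector('#content').innerHTML = html;
        });
    }

    window.addEventListener('hashchange', route); // user clicks a #! link
    route();                                      // initial load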
Google also recognizes this URL scheme as an AJAX URL, and will try to fetch the content from the server as it would be rendered by your JavaScript. If you're planning to make a site using this technique, take a look at Google's AJAX crawling documentation for webmasters. Also keep in mind that you should not rely on JavaScript being enabled, as Gawker learned the hard way.
The hashbang is going out of use on a lot of sites, even if JavaScript does the routing. This is possible because all major browsers support the History API. To do this, they make every path on the site return the same JavaScript, which then looks at the actual URL to load the content. When the user clicks a link, JavaScript intercepts the click event, uses the History API to push a new page onto the browser history, and then loads the new content.
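Roughly, the History API version looks like this (the data-internal marker and the loadContent helper are hypothetical stand-ins for your own routing code):

    // intercept clicks on internal links
    document.addEventListener('click', function (e) {
      var link = e.target.closest('a[data-internal]'); // hypothetical marker attribute
      if (!link) return;

      e.preventDefault();
      history.pushState(null, '', link.getAttribute('href')); // real URL, no hashbang
      loadContent(link.getAttribute('href'));                 // hypothetical: fetch + inject the page
    });

    // handle back/forward buttons
    window.addEventListener('popstate', function () {
      loadContent(location.pathname);
    });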
I am in the process of developing an online music magazine. We have an HTML5/Flash music player, and it forms a major part of the website. But the site also has a lot of articles and other content. So basically, I want seamless music playback across page loads, but I also want to avoid a complete JavaScript application, because I want all the content to be spider-friendly and indexable by Google.
I use the HTML5 History API with a hashbang (#!) fallback for loading various content within the main page on clicks. The URLs loaded also point to pages with the content.
For example:
A munimkazia.com/page1.html link on my index page, munimkazia.com, will load the content from page1.html and insert it. The URL will change to munimkazia.com/#!/page1.html in Firefox and IE, and to munimkazia.com/page1.html in Chrome.
Since the href link is munimkazia.com/page1.html, the spider will follow the link and fetch the content.
I have the page set up properly at page1.html, ready for viewing. But now I have problems:
If I decide to use AJAX loads on this page, the URLs appearing in the browser's location bar will not be consistent with the hashbang fallback (http://munimkazia.com/page1.html/#!/page2.html).
If I decide to redirect all clicks to the main container page at http://munimkazia.com and load page2.html there, everything will work fine from then on, but that full page load will interrupt any music that is playing.
Also, I don't want to rewrite every http://munimkazia.com/page1.html URL to http://munimkazia.com/#!/page1.html, because I want all the content to be present in the page rather than fetched and written by JavaScript, so that search engine spiders can read it.
I am aware that Google has a spec for reading the content from #! URLs, but I want the page to load with the article content for the user even if JS is disabled.
Any ideas/advice/workarounds?
Edit: Those URLs are just examples to explain my point. There is no JavaScript code fetching pages at munimkazia.com.
Hash-bang (#!) URLs can be indexed by Google; that's kind of the whole point of them, otherwise people would just use a plain hash (#) on its own.
I think the idea is that Google sees the #! URL and converts it into a query-string parameter, e.g. example.com/#!/products/123/ipod-nano-32gb becomes example.com/?_escaped_fragment_=/products/123/ipod-nano-32gb, but users still see the hash-bang URL. You program the server to respond to the ?_escaped_fragment_ parameter, while JavaScript users get redirected to the proper #! URL.
Check out Google's specification here: http://code.google.com/web/ajaxcrawling/docs/getting-started.html
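On the server, handling that parameter can be as small as this (an Express sketch; renderStaticVersion is a stand-in for however you generate the crawler-friendly HTML):

    var express = require('express');
    var app = express();

    app.get('/', function (req, res) {
      var fragment = req.query._escaped_fragment_;

      if (fragment !== undefined) {
        // the crawler asked for example.com/?_escaped_fragment_=/products/123/ipod-nano-32gb
        return res.send(renderStaticVersion(fragment)); // stand-in for your static rendering
      }

      // normal users get the JavaScript app, which reads the #! from location.hash
      res.sendFile(__dirname + '/index.html');
    });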
I don't think it's a good idea to use both types of URL, as you'd have two URLs being posted on blogs, Twitter, etc. by users for the same page, and it would also be a nightmare to write the code to handle that reliably. You'd probably have to settle for hash-bangs for now, until the HTML5 History API is more broadly supported.