I have a web app that is completely controlled with JavaScript. This means only one request renders the full page; everything else is returned as JSON and rendered client-side.
Facebook share uses Open Graph tags in the head to control what text and image are displayed when sharing, but my application is a single page with a single head, so I can only use one image for all links in the app.
We are using hashbangs (#!) to track where the user is within the application, but we also have a URL rewriter so that when the user hits a URL like
http://domain/action/id
they get sent (through a redirect header) to
http://domain/#!/action/id
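(For illustration, the rewriter behaves roughly like this hypothetical Express sketch; our real server setup differs.)

```javascript
// Hypothetical sketch of the rewriter: clean URLs redirect to hashbang URLs.
const express = require('express');
const app = express();

app.get('/:action/:id', (req, res) => {
  // e.g. /photos/42 -> /#!/photos/42, sent as a Location header
  res.redirect(302, `/#!/${req.params.action}/${req.params.id}`);
});

app.listen(3000);
```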
So, given this scenario, I want to know if there is a way to share my URLs on Facebook and tell Facebook to fetch a different image for each of them.
When the Facebook scraper fetches the page, it does not execute the JavaScript in it; it only searches the response for Open Graph tags. So there is basically no way to share your URLs on Facebook with a different picture per URL unless the Open Graph tags are present in the response itself, without JavaScript. Think of what you would do if you had no JavaScript at all.
So what you have to do is render the Open Graph tags into the page server-side, before any JavaScript runs.
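A minimal sketch of the idea, assuming an Express server and a hypothetical imageFor lookup: the scraper ignores the script and reads the tags, while real browsers follow the JavaScript redirect into the app, replacing the redirect header.

```javascript
// Sketch: render per-URL Open Graph tags server-side, then hand off to the SPA.
const express = require('express');
const app = express();

// Hypothetical lookup; a real app would query its data store.
function imageFor(action, id) {
  return `http://domain/images/${action}-${id}.jpg`;
}

app.get('/:action/:id', (req, res) => {
  const { action, id } = req.params;
  res.send(`<!DOCTYPE html>
<html>
<head>
  <meta property="og:title" content="${action} ${id}" />
  <meta property="og:image" content="${imageFor(action, id)}" />
  <script>location.replace('/#!/${action}/${id}');</script>
</head>
<body></body>
</html>`);
});

app.listen(3000);
```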
I have a URL and I need the content at that URL. Actually, I want to embed the content from that URL in my page. What are the possible ways to do this without any CORS issues?
I have already tried iframes and other tricks to get around CORS, but I am still unable to embed many websites in an iframe. My main aim is not to embed the website itself in an iframe but to embed the content behind the URL.
I would be more than happy if there were an API that returns the content of a URL.
If there is not, how can I create such an API, and where can I host it?
As a workaround, I created a Flask app that returns the whole website so that I can embed it in my iframe. But that makes it a proxy server, and I cannot host it on Heroku because Heroku does not allow public proxy servers.
Please note that embedding the whole site or just its content is of similar use to me; the only difference is that the embedded site would be interactive while the extracted content would not. I am fine with either.
Now I am thinking of making a web crawler that will crawl the web, cache websites, and store them in a database. I am thinking of hosting this on Firebase. I have no idea about Firebase's capabilities; I just know that it provides a NoSQL database and some Cloud Functions.
The Cloud Functions part interests me.
I don't know if the following is correct/feasible.
I will send the URL (whose content I need) to Firebase. It will then fetch the whole website and store it in the database (so that the next time I request the same URL, it does not need to re-fetch the whole site) and then return this cached content.
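If it helps, here is a rough sketch of how that could look as a Cloud Function, assuming a Node 18+ runtime (for the global fetch) and Firestore as the cache; all names here are illustrative:

```javascript
// Sketch: fetch a URL once, cache it in Firestore, serve the cached copy afterwards.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.fetchPage = functions.https.onRequest(async (req, res) => {
  const url = req.query.url;
  // Firestore document IDs cannot contain '/', so encode the URL first.
  const doc = admin.firestore().collection('pages').doc(encodeURIComponent(url));

  const snapshot = await doc.get();
  if (snapshot.exists) {
    res.send(snapshot.get('html')); // cache hit: no re-fetch needed
    return;
  }

  const response = await fetch(url); // cache miss: fetch the live page
  const html = await response.text();
  await doc.set({ html, fetchedAt: Date.now() });
  res.send(html);
});
```

One caveat: Firestore documents are capped at about 1 MB, so larger pages would need something like Cloud Storage instead.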
If there is a better way to do this, please let me know.
I need the content for my Chrome extension: https://github.com/Shaikh-Ubaid/InSyd
For code, please refer to this GitHub repo.
To get an idea of why I want to fetch/embed content, I am attaching a GIF showing the extension at work.
<img src="https://raw.githubusercontent.com/Shaikh-Ubaid/InSyd/master/Popup-demo.gif" alt="gif" />
I'm building a very simple SPA 'wannabe' site (for a friend, and for practice). The idea is simple: three static pages for home, portfolio, and contact. I've made the portfolio and contact links change the URL, and I just need to write some functions to change the content. So far so good, but when I refresh while on the /contact or /portfolio "page", I get the error "Cannot GET /portfolio". The same happens when I copy and paste the link into another browser. The whole purpose of the site is to be able to send links and open them. Could this be achieved without server-side code?
No.
You must have a server side, because this mode requires URL rewriting: you have to rewrite all requests to the index of your app.
Alternatively, you can disable HTML5 history mode (and use hashes instead).
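For example, with an assumed Express server, the rewrite amounts to a catch-all route:

```javascript
// Hypothetical Express setup: serve index.html for every route,
// so refreshes and pasted links reach the SPA instead of a 404.
const express = require('express');
const path = require('path');
const app = express();

app.use(express.static(path.join(__dirname, 'public'))); // JS, CSS, images

app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);
```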
I'm working on a web app which uses Backbone's HTML5 History option. In order to avoid having to code everything on the client and on the server, I'm using this method to route every request to index.html
I was wondering if there is a way to get Twitter Cards to work with this setup, as currently the crawler can't read the page, since everything is loaded dynamically with JavaScript.
I was thinking about using user agents to detect whether the visitor is Twitterbot, and if it is, serving a static version of the page with the required meta tags. Would this work?
Thanks.
Yes.
At one job we did this for all the SEO/search/Facebook crawlers.
We would sniff the user agent, and if it was one of the following crawlers:
Facebook Open Graph
Google
Bing
Twitter
Yandex
(a few others I can't remember)
we would redirect to a special page that dumped all the relevant data about the requested page into a nicely formatted (but completely unstyled) document for SEO purposes.
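A rough sketch of the idea (assumed Express middleware; the bot pattern and the /seo route are illustrative, not our exact setup):

```javascript
// Sketch: send known crawlers to a server-rendered page, everyone else to the SPA.
const express = require('express');
const app = express();

const BOT_PATTERN = /facebookexternalhit|googlebot|bingbot|twitterbot|yandex/i;

app.use((req, res, next) => {
  const ua = req.headers['user-agent'] || '';
  if (BOT_PATTERN.test(ua)) {
    res.redirect(`/seo${req.url}`); // crawlers get the unstyled, tag-filled page
  } else {
    next(); // real users get the single-page app
  }
});
```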
This allowed us to retain our Google index position and proper Facebook sharing even though our site was a total single-page app in Backbone.
Yes, serving a specific page to Twitterbot with the right metadata markup will work.
You can test your results while developing using the cards preview tool:
https://dev.twitter.com/docs/cards/preview (with your static URL or just the tags).
How can I make my pages work like Grooveshark's pages?
http://grooveshark.com/#!/popular
Is there a tutorial or something that explains how to show pages this way with jQuery or JavaScript?
The hash and exclamation mark in a URL are called a hashbang, and are usually used in web applications where JavaScript is responsible for actually loading the page. Content after the hash is never sent to the server. So, for example, with the URL example.com/#!recipes/bread, the page at example.com would be fetched from the server, and it could contain a piece of JavaScript. That script can then read location.hash and load the page at /recipes/bread.
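A minimal sketch of that client-side routing (the loadPage helper and #content element are hypothetical):

```javascript
// Sketch: read the hashbang route and fetch the matching server path.
function loadPage(path) {
  fetch(path)
    .then((res) => res.text())
    .then((html) => { document.getElementById('content').innerHTML = html; });
}

function route() {
  // location.hash looks like "#!recipes/bread"; strip the "#!".
  const path = location.hash.replace(/^#!/, '') || 'home';
  loadPage('/' + path);
}

window.addEventListener('hashchange', route); // user navigated within the app
window.addEventListener('DOMContentLoaded', route); // initial page load
```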
Google also recognizes this URL scheme as an AJAX URL and will try to fetch the content from the server as it would be rendered by your JavaScript. If you're planning to build a site using this technique, take a look at Google's AJAX crawling documentation for webmasters. Also keep in mind that you should not rely on JavaScript being enabled, as Gawker learned the hard way.
The hashbang is going out of use on a lot of sites, even if JavaScript still does the routing. This is possible because all major browsers support the History API. To do this, sites make every path return the same JavaScript, which then looks at the actual URL to load the content. When the user clicks a link, JavaScript intercepts the click event, uses the History API to push a new entry onto the browser history, and then loads the new content.
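A matching sketch of History API routing, reusing the hypothetical loadPage from above:

```javascript
// Sketch: intercept internal link clicks and route without a full reload.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a');
  if (!link || link.origin !== location.origin) return; // only internal links

  event.preventDefault();
  history.pushState({}, '', link.pathname); // update the address bar, no reload
  loadPage(link.pathname);
});

// Handle the browser's back/forward buttons.
window.addEventListener('popstate', () => loadPage(location.pathname));
```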
Is it possible to have a JavaScript file that is aware of two different HTML files? And how would I do this?
I would like to have two pages: index.html and pictures.html. I have an index.js that changes the display properties of index.html (it puts data about people into tables and makes it look nice). I would like this same index.js file to also be able to edit the pictures.html file and change information there. index.html would link to pictures.html to display pictures of a person (based on the person's name; I have them saved as smith1.jpg, smith2.jpg, reagan2.jpg, etc.). Is there any way this JavaScript file could get DOM elements by their id or class in the second file (pictures.html) even though it "lives in" index.html? When I say "lives in", I mean it is loaded at the top of the index.html page.
Thanks.
A script can access elements on another page if that page was loaded through some kind of connection to it.
For example, if you open a popup using var popup = window.open(), the return value contains a reference to the opened popup, and this allows access to elements within it, e.g. popup.document.getElementById('something'). Pages loaded in frames, iframes, and the like have similar means of access.
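For example, a small sketch (assuming both pages are on the same origin and that pictures.html has a hypothetical #gallery element):

```javascript
// Sketch: open pictures.html from index.js and modify its DOM once it loads.
const popup = window.open('pictures.html', 'pictures');

popup.addEventListener('load', () => {
  const img = popup.document.createElement('img');
  img.src = 'smith1.jpg';
  popup.document.getElementById('gallery').appendChild(img);
});
```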
So yes, if your page loads the second page, its script can work there as well. I suggest avoiding this beyond opening and closing popups from a script, though; a script should stay inside the box of its own page, and if it needs to do larger operations on another page, that usually means you need to change your code architecture a bit.
You'll need to explore server-side programming to accomplish your goal.
http://en.wikipedia.org/wiki/Server-side_scripting
...Or you could write a client-side application in which "pages" are separate views of one actual page or are generated from backing data structures. If you want persistence of what is created/edited, you'll still need server-side programming.
You can also use the HTML5 postMessage API. It allows you to send messages to another page; in that page you define an event handler that knows how to handle the message.
This also works across domains.
Here is a blog post with an example I found via Google:
http://robertnyman.com/2010/03/18/postmessage-in-html5-to-send-messages-between-windows-and-iframes/
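A minimal sketch of the pattern (the iframe id, the two example origins, and the showPicturesFor helper are placeholders):

```javascript
// In index.html (say https://parent.example): send a message to the iframe.
const frame = document.getElementById('pictures-frame');
frame.addEventListener('load', () => {
  // The second argument is the iframe's expected origin.
  frame.contentWindow.postMessage({ person: 'smith' }, 'https://child.example');
});

// In pictures.html (served from https://child.example): receive it.
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://parent.example') return; // only trust the parent
  showPicturesFor(event.data.person); // hypothetical rendering helper
});
```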
Not possible on the client side if editing the actual HTML file is your goal. If getting pictures to show up depending on what a user does on another page is all you care about, then there are lots of options.
You can pass small sets of data, such as values the user entered into tables, via cookies, and use them to pick the right image files in a pre-established scheme. This would actually persist until the user cleared their cookies.
You could wrap both pages in same-domain iframe elements, with the parent element containing just the JS. This would allow you to persist data between pages and react to iframe load events, but like everything in client-side JS, it's all gone when you reload the parent page.
Newer browsers have working file-access objects that aren't total security nightmares, but these are new and non-standard enough that it would take some doing to make them work across browsers. They could be used to save files containing info that the user would probably have to be prompted to upload when they return to the site.
If the data's not sensitive, you could get creative and use another service to stash collections of data. Use the Twitter API to tweet data to some publicly visible page of a Twitter account (check the Terms of Service if you're doing anything more than an isolated class project here). Then do an Ajax GET request on whatever URL it's publicly visible at and parse the HTML for your Twitter data.
Other things I'd look into: data URIs and HTML5 localStorage.
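For instance, a quick localStorage sketch (the key and the image class are illustrative); it persists across same-origin pages and survives reloads:

```javascript
// In index.js (on index.html): stash which person was selected.
localStorage.setItem('person', 'smith');

// In a script on pictures.html: read it back and point the images at the right files.
const person = localStorage.getItem('person') || 'smith';
document.querySelectorAll('img.person').forEach((img, i) => {
  img.src = `${person}${i + 1}.jpg`; // e.g. smith1.jpg, smith2.jpg
});
```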
Note: None of these are approaches I would seriously consider for a professional site where the data was expected to be persistent or in any way secure regardless of where a user accesses it from.