I have some text that includes URLs to GitHub Gists. I'd like to look for those URLs and put the Gist inline in the content client-side. Some things I've tried:
A direct lookup to GitHub's OEmbed API.
For https://gist.github.com/733951, this means I do a JSON-P lookup to
https://github.com/api/oembed?format=json&url=https%3A%2F%2Fgist.github.com%2F733951,
extract the html property of the object, and add that to my page. The problem
here is that GitHub's OEmbed API only returns the first three lines of the Gist.
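For reference, the lookup I'm doing looks roughly like this (jQuery, with an illustrative #gist-container target):
$.ajax({
    url: 'https://github.com/api/oembed',
    data: { format: 'json', url: 'https://gist.github.com/733951' },
    dataType: 'jsonp',
    success: function (oembed) {
        // oembed.html only contains the first few lines of the gist,
        // which is the limitation described above.
        $('#gist-container').html(oembed.html);
    }
});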
Using the jQuery-embedly plugin.
Calling
jQuery('a.something').embedly({allowscripts: true})
works, but Embedly strips formatting from the Gist. Wrapping it in a <pre> tag doesn't help because there are no line-breaks.
Using GitHub's .js version of the gist.
https://gist.github.com/733951.js uses document.write, so I don't have any control over where in the page it ends up when I load it dynamically. (If I could write it into the HTML source it would show up in the right place, but this is all being done client-side.)
Inspired by client-side gist embedding, I built a script.js hack library just for that (I also use it to remove the embedded link style and use my own style that fits better on my page).
It's more generic than just embedding gists and pasties - actually I'm using it to dynamically load some third-party widgets, Google Maps, and Twitter posts.
https://github.com/kares/script.js
Here's the embedding example:
https://github.com/kares/script.js/blob/master/examples/gistsAndPasties.html
UPDATE: the gist API has since added JSONP support; a jQuery sample:
var printGist = function (gist) {
    console.log(gist.repo, ' (' + gist.description + ') :');
    console.log(gist.div);
};

$.ajax({
    url: 'https://gist.github.com/1641153.json',
    dataType: 'jsonp',
    success: printGist
});
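To actually put the gist on the page, the same response can be used along these lines (the div and stylesheet fields are what the endpoint has returned historically; #gist-container is just an illustrative target):
var embedGist = function (gist) {
    // Load the gist stylesheet once so the embedded code keeps its highlighting.
    $('<link rel="stylesheet">').attr('href', gist.stylesheet).appendTo('head');
    // Insert the ready-made HTML for the gist into the page.
    $('#gist-container').html(gist.div);
};

$.ajax({
    url: 'https://gist.github.com/1641153.json',
    dataType: 'jsonp',
    success: embedGist
});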
I just started a project called UrlSpoiler (on github). It will help you embed gists dynamically. It's hosted on Heroku on the free/shared platform so you can play with it, but I'd recommend copying the code you need into your own app for production use.
My application uses AngularJS for the frontend and .NET for the backend.
In my application I have a list view. On clicking each list item, it fetches a pre-rendered HTML page from S3.
I am using Angular UI-Router states.
app.js
...
state('staticpage', {
    url: "/staticpage",
    templateUrl: function () {
        return 'http://xxxxxxx.cloudfront.net/staticpage/staticpage1.html';
    },
    controller: 'StaticPageCtrl',
    title: 'Static Page'
})
StaticPage1.html
<div>
    Hello static world 1!
</div>
How do I handle SEO here?
Do I really need to create HTML snapshots using PhantomJS or something similar?
Yes, PhantomJS would do the trick, or you can use prerender.io; with that service you can just use their open-source renderer and run your own server.
Another way is to use the _escaped_fragment_ meta tag (<meta name="fragment" content="!">), which tells crawlers to request the page again with a ?_escaped_fragment_= parameter so you can serve them a rendered snapshot.
I hope this helps; if you have any questions, add comments and I will update my answer.
Did you know that Google renders HTML pages and executes the JavaScript code in the page, and does not need any pre-rendering anymore?
https://webmasters.googleblog.com/2014/05/understanding-web-pages-better.html
And take a look at these:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
http://wijmo.com/blog/how-to-improve-seo-in-angularjs-applications/
My project's front-end is also built on top of Angular, and I decided to solve the SEO issue like this:
I've created an endpoint for all search engines (SE) where all the requests with the _escaped_fragment_ parameter go;
I parse the HTTP request for the _escaped_fragment_ GET parameter;
I make a cURL request with the parsed category and article parameters and get the article content;
Then I render the simplest (and SEO-friendly) template for the SE with the article content, or throw a 404 Not Found exception if the article does not exist.
In total: I do not need to prerender any HTML pages or use prerender.io, I have a nice user interface for my users, and search engines index my pages very well.
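To make the flow concrete, here is a rough sketch of the same idea in Express-style JavaScript (my real code is PHP, so treat this as illustrative only; fetchArticle is a placeholder helper, not a real library call):
var express = require('express');
var app = express();

app.get('*', function (req, res, next) {
    var fragment = req.query._escaped_fragment_;
    if (fragment === undefined) {
        return next(); // normal visitors get the regular Angular application
    }
    // Search engines arrive with ?_escaped_fragment_=/category/article-slug
    fetchArticle(fragment, function (err, article) {
        if (err || !article) {
            return res.status(404).send('Not Found');
        }
        // Render the simplest possible SEO-friendly page with the article content.
        res.send('<html><head><title>' + article.title + '</title></head>' +
                 '<body><h1>' + article.title + '</h1>' + article.body + '</body></html>');
    });
});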
P.S. Do not forget to generate a sitemap.xml and include in it all the URLs (with _escaped_fragment_) which you want to be indexed.
P.P.S. Unfortunately my project's back-end is built on top of PHP, so I can not show you a suitable example. But if you want more explanation, do not hesitate to ask.
Firstly, you cannot assume anything.
Google does say that its bots can understand JavaScript applications quite well, but that is not true for all scenarios.
Start by using the Fetch as Google feature in Webmaster Tools on your link and see whether the page is rendered properly. If yes, then you need not read further.
In case you see just your skeleton HTML, this is because the Google bot assumes the page load is complete before it actually completes. To fix this you need an environment where you can recognize that a request is from a bot and return a prerendered page to it.
To create such an environment, you need to make some changes to your code.
Follow the instructions in Setting up SEO with Angularjs and Phantomjs,
or alternatively just write code in any server-side language like PHP to generate prerendered HTML pages of your application.
(PhantomJS is not mandatory.)
Create a redirect rule in your server config which detects the bot and redirects it to the prerendered plain HTML files. (The only thing you need to make sure of is that the content of the page you return matches the actual page content, or else bots might not consider the content authentic.)
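An application-level variant of that redirect rule, sketched as Express-style middleware (the user-agent list and snapshot path are purely illustrative):
var express = require('express');
var path = require('path');
var app = express();

var bots = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit|twitterbot/i;

app.use(function (req, res, next) {
    if (bots.test(req.headers['user-agent'] || '')) {
        // Serve the prerendered snapshot for this route; its content must match
        // what real users see, or bots may not consider it authentic.
        return res.sendFile(path.join(__dirname, 'snapshots', req.path, 'index.html'));
    }
    next(); // everyone else gets the normal Angular application
});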
Note that you also need to consider how you will make entries in sitemap.xml dynamically when you add pages to your application in the future.
If you are not looking for such overhead and you are short on time, you can simply go with a managed service like Prerender.
Eventually bots will mature, they will understand your application, and you will say goodbye to your SEO proxy infrastructure. This is just for the time being.
At this point in time, the question really becomes somewhat subjective, at least with Google -- it really depends on your specific site, like how quickly your pages render, how much content renders after the DOM loads, etc. Certainly (as #birju-shaw mentions) if Google can't read your page at all, you know you need to do something else.
Google has officially deprecated the _escaped_fragment_ approach as of October 14, 2015, but that doesn't mean you might not want to still pre-render.
YMMV on trusting Google (and other crawlers) for reasons stated here, so the only definitive way to find out which is best in your scenario would be to test it out. There could be other reasons you may want to pre-render, but since you mentioned SEO specifically, I'll leave it at that.
If you have a server-side templating system (PHP, Python, etc.), you can implement a solution like prerender.io.
If you only have AngularJS files hosted on a static server (e.g. Amazon S3), have a look at the answer in the following post: AngularJS SEO for static webpages (S3 CDN).
Yes, you need to prerender the page for the bots. prerender.io can be used, and your page must have the meta tag
<meta name="fragment" content="!">
I'm trying to use the ms-seo package for Meteor, but I don't understand how it works.
It's supposed to add meta tags to your page for crawlers and social media (Google, Facebook, Twitter, etc.).
According to the docs, to see it working all I should have to do is
meteor add manuelschoebel:ms-seo
and then add some defaults
Meteor.startup(function () {
    if (Meteor.isClient) {
        return SEO.config({
            title: 'Manuel Schoebel - MVP Development',
            meta: {
                'description': 'Manuel Schoebel develops Minimal Viable Products (MVP) for Startups',
            },
            og: {
                'image': 'http://manuel-schoebel.com/images/authors/manuel-schoebel.jpg',
            }
        });
    }
});
which I did but that code only executes on the client (browser). How is that helpful to search engines?
So I tested it:
curl http://localhost:3000
The results have no meta tags.
If in the browser I go to http://localhost:3000 and inspect the elements in the debugger I see the tags, but if I check the source I don't.
I don't understand how client-side added tags have anything to do with SEO. I thought Google, Facebook, and Twitter, when scanning your page for meta tags, basically just do a single request, effectively the same as curl http://localhost:3000.
So how does this package actually do anything useful? I feel stupid. With 27k users it must work, but I don't understand how. Does it require the spiderable package to get static pages generated?
You are correct. You need to use something like the spiderable package or prerender.io to get this to work. This package will add tags, but like any Meteor page, it's rendered on the client.
Try this with curl to see the result when using spiderable:
curl http://localhost:3000/?_escaped_fragment_=
Google will now render the JS itself, so for Google to index your page correctly you don't need to use spiderable/prerender.io, but for other search engines I believe you still have to.
An alternate answer:
Don't use spiderable, as it uses PhantomJS, which is rather resource-intensive when bots crawl your site.
Many Meteor devs are using Prerender these days; check it out.
If you still have some problems with social share buttons or the package, try reading this: https://webdevelopment7636.wordpress.com/2017/02/15/social-share-with-meteor/. It was the only way I got mine to work. You don't have to worry about PhantomJS or spiderable to make it work fine.
It is a complete tutorial using meteorhacks:ssr and meteorhacks:picker. You have to create a crawler filter on the server side and a route that is called by it when triggered. The route dynamically sends the template and the data to an HTML file in the "private" folder, and renders that HTML to the crawler. The template in the private folder is the one that gets the meta tags and the head tag.
The file that goes in the private folder is shown in the tutorial; I can't put the other links with the code here, but if you need any more help, go to the first link and see if the tutorial helps.
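For orientation, the server-side piece looks roughly like this (names, the collection, and the template file are illustrative; the real code is in the tutorial):
// server/crawler.js
var crawlers = /googlebot|bingbot|facebookexternalhit|twitterbot|linkedinbot/i;

// meteorhacks:picker filters matching requests on the server.
var crawlerRoutes = Picker.filter(function (req, res) {
    return crawlers.test(req.headers['user-agent'] || '');
});

crawlerRoutes.route('/post/:slug', function (params, req, res) {
    // meteorhacks:ssr compiles a Blaze template stored in /private
    // (read with Assets.getText) and renders it to plain HTML.
    SSR.compileTemplate('seoPage', Assets.getText('seo-page.html'));

    var post = Posts.findOne({ slug: params.slug }); // illustrative collection
    var html = SSR.render('seoPage', { title: post.title, description: post.summary });

    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(html);
});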
I'm setting up the API documentation for a project, and wanted to know what the best tool for the job is.
The site is completely static EXCEPT for the API keys, which I'd like to include in the code examples depending on the user (the user gets their own API key if they're logged in).
How can I achieve this while maintaining a static site? (I'm using a static-site generator, Middleman.)
I would suggest including a small Ajax script on all pages, which will perform a search-and-replace through the page.
On the static page you will have code like this:
<!-- EMPTY SPAN IN PAGE TEMPLATE -->
<span class='api-key'></span>
everywhere you want to have api keys embedded. The script will perform the simple task of search-and-replace (pseudocode follows, assuming you have jQuery on the page):
$(document).ready(function () {
    $.get("/api/key", function (data) { /* supply credentials if needed */
        $('.api-key').html(data);
    });
});
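The /api/key endpoint itself has to live outside the static site; a purely hypothetical sketch of it in Express (session handling and user lookup are illustrative):
// assuming an Express app with session middleware already set up
app.get('/api/key', function (req, res) {
    if (!req.session || !req.session.userId) {
        return res.status(401).send(''); // not logged in: the span stays empty
    }
    // Look up the logged-in user's key however your backend stores it.
    var user = findUserById(req.session.userId); // illustrative helper
    res.type('text/plain').send(user.apiKey);
});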
Hope it helps.
Is there any URL Google has that contains the raw data for the file? Using https://drive.google.com/file/d/FILE_ID just takes you to a 'share' section of the file. Say I have a .js file on Google Drive; if you go to its share link, they show a share page. Is there any link to get the raw JavaScript from the file, so as to use it in a <script src="google_link_or_whatever">?
First, go to the sharing settings for your document and choose "Anyone with a link." It will generate a link in the format https://drive.google.com/file/d/XXX/view?usp=sharing.
Now you can use the following URL format: https://drive.google.com/uc?id=XXX
Note that I'm seeing an HTTP redirect when I do this, so use curl -L on the command line or otherwise make sure that your HTTP client follows redirects.
Sharing link:
https://drive.google.com/file/d/YOUR_ID/view?usp=sharing
Raw download link:
https://drive.google.com/uc?export=download&id=YOUR_ID
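For the use case in the question (loading a .js file), that URL can be used as a script source, for example like this (YOUR_ID as in the links above; whether Drive serves a content type your browser accepts for scripts may vary):
var s = document.createElement('script');
s.src = 'https://drive.google.com/uc?export=download&id=YOUR_ID';
document.head.appendChild(s);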
NOTE - THIS WAS THE SOLUTION BUT IT NO LONGER WORKS - SEE COMMENT BY #Bobby Fritze below
No APIs and no JS necessary.
Confirmed now working on latest version of Drive.
Great workaround if your server doesn't use HTTPS but a vendor plugin demands HTTPS to call in a CSS or other file:
1. On the folder with your intended file (e.g. FILE.css), hit Sharing Settings, then Advanced, then select "Public on the web - Anyone on the Internet can find and view."
2. In the URL bar (or share link), copy everything after the drive.google.com/drive/u/0/folders/
3. Use that ID to replace the XX-XXXXXXXXXXXXX in: http://googledrive.com/host/XX-XXXXXXXXXXXXX/FILE.css
4. Navigate to the appended URL in Step 3 and you will now see your raw data.
My use case below:
<script type="text/javascript">
    var vsDisableResize = false;
    var vsCssUrl = 'https://cbe7c864b9c1ae8d5be60c7fed3e467334a04d2f.googledrive.com/host/0B9ngkmVbo5T7TDhTTU81M25iNnc/cart.css';
    var vsWineryId = '850';
    var vsWineListId = '71';
</script>
Credit to #chris.huh at: https://productforums.google.com/forum/#!topic/drive/MyD7dgLJaEo
I have a .csv file that I wish to load; it contains information that the HTML page will format itself with. I'm not sure how to do this, however.
Here's a simple image of the files: http://i.imgur.com/GHfrgff.png
I have looked into HTML5's FileReader and it seems like it will get the job done, but it seems to require the use of input forms. I just want to load the file and be able to access the text inside and manipulate it as I see fit.
This post mentions AJAX; however, this webpage will only ever be deployed locally, so that's a bit iffy.
How is this usually done?
Since your web page and data file are in the same directory, you can use AJAX to read the data file. However, I note from the icons in your image that you are using Chrome. By default Chrome prevents exactly that and reports an access violation. To allow the data file to be read, you must invoke Chrome with the command-line option --allow-file-access-from-files.
An alternative, which may work for you, is to drag the file and drop it onto your web page. Refer to your preferred DOM reference for "drag and drop files".
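A minimal sketch of that approach (the element id is illustrative):
var dropZone = document.getElementById('drop-zone');

dropZone.addEventListener('dragover', function (e) {
    e.preventDefault(); // required so the browser allows the drop
});

dropZone.addEventListener('drop', function (e) {
    e.preventDefault();
    var file = e.dataTransfer.files[0]; // the dropped .csv file
    var reader = new FileReader();
    reader.onload = function () {
        var text = reader.result; // raw CSV text, ready to parse
        // ...format the page with it here
    };
    reader.readAsText(file);
});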
You can totally make an ajax request to a local file, and get its content back.
If you are using jQuery, take a look at the $.get() function, which will return the content of your file in a variable. You just need to pass the path of your file as a parameter, as you would when querying a "normal" URL.
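For example (path relative to the page; the parsing is a naive split just for illustration):
$.get('testcsv.csv', function (text) {
    var rows = text.split('\n').map(function (line) {
        return line.split(',');
    });
    // ...use rows to build the page
});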
You cannot make cross-domain Ajax requests, for security reasons. That's the whole point of having APIs. However, you can make an API out of the $.get request URL.
The solution is to use YQL (Yahoo Query Language), which is a pretty nifty tool for making API calls out of virtually any website. Then you can easily read the contents of the file and use it.
You might want to look at the official documentation and the YQL Console
I also wrote a blog post specifically about using YQL for cross-domain Ajax requests. Hope it helps.
You can try AJAX (if you do not need asynchronous processing, set "async" to false). The version below ran in every browser I tried when served via a local web server (the address contains "localhost") and the text file was indeed in UTF-8 format. If you start the page via the file system (the address starts with "file"), then Chrome (and likely Safari, too, but not Firefox) generates the "Origin null is not allowed by Access-Control-Allow-Origin" error mentioned above. See the discussion here.
$.ajax({
    async: false,
    type: "GET",
    url: "./testcsv.csv",
    dataType: "text",
    contentType: "application/x-www-form-urlencoded;charset=UTF-8",
    success: function (data) {
        //parse the file content here
    }
});
The idea of using script files that contain the settings as variables, mentioned by Phrogz, might be a viable option in your scenario, though. I was using files in the INI format that users could change.
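For completeness, the settings-as-variables idea boils down to something like this (names and values are illustrative), loaded with an ordinary script tag before your page code:
// settings.js
var pageSettings = {
    title: 'My locally deployed page',
    rows: [
        ['name', 'value'],
        ['example', '42']
    ]
};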