Understanding ms-seo for Meteor - JavaScript

I'm trying to use the ms-seo package for Meteor, but I'm not understanding how it works.
It's supposed to add meta tags to your page for crawlers and social media (Google, Facebook, Twitter, etc.).
According to the docs, all I should have to do to see it working is
meteor add manuelschoebel:ms-seo
and then add some defaults
Meteor.startup(function () {
  if (Meteor.isClient) {
    return SEO.config({
      title: 'Manuel Schoebel - MVP Development',
      meta: {
        'description': 'Manuel Schoebel develops Minimal Viable Products (MVP) for Startups',
      },
      og: {
        'image': 'http://manuel-schoebel.com/images/authors/manuel-schoebel.jpg',
      }
    });
  }
});
which I did, but that code only executes on the client (browser). How is that helpful to search engines?
So I tested it:
curl http://localhost:3000
The results contain no meta tags.
If I go to http://localhost:3000 in the browser and inspect the elements in the debugger, I see the tags, but if I check the page source I don't.
I don't understand how client-side added tags have anything to do with SEO. I thought Google, Facebook, and Twitter, when scanning your page for meta tags, basically just do a single request, effectively the same as curl http://localhost:3000.
So how does this package actually do anything useful? I feel stupid; with 27k users it must work, but I don't understand how. Does it require the spiderable package to generate static pages?

You are correct. You need to use something like the spiderable package or prerender.io to get this to work. This package will add tags, but like any Meteor page, it's rendered on the client.
Try this with curl to see the result when using spiderable:
curl http://localhost:3000/?_escaped_fragment_=
Google will now render the JS itself, so for Google to index your page correctly you don't need to use spiderable/prerender.io, but for other search engines I believe you still do.

An alternate answer:
Don't use spiderable, as it uses PhantomJS, which is rather resource-intensive when bots crawl your site.
Many Meteor devs are using Prerender these days; check it out.
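For reference, Prerender is usually wired in as Connect middleware. A minimal sketch of what that can look like in Meteor, assuming the prerender-node npm package is available on the server (e.g. through meteorhacks:npm) and using a placeholder token:
// Server only: hand crawler requests to the Prerender service.
Meteor.startup(function () {
  if (Meteor.isServer) {
    var prerender = Meteor.npmRequire('prerender-node');
    // 'YOUR_TOKEN' is a placeholder for your prerender.io token.
    WebApp.connectHandlers.use(prerender.set('prerenderToken', 'YOUR_TOKEN'));
  }
});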

If you still have problems with social share buttons or the package, try reading this: https://webdevelopment7636.wordpress.com/2017/02/15/social-share-with-meteor/ . It was the only way I got mine to work, and you don't have to worry about PhantomJS or spiderable to make it work.
It is a complete tutorial using meteorhacks:ssr and meteorhacks:picker. You create a crawler filter on the server side and a route that gets called when the filter matches. The route dynamically feeds the template and the data into an HTML file in the "private" folder and renders that HTML to the crawler. The template in the private folder is the one that gets the meta tags and the <title> tag. A rough sketch of the idea is shown below.
The file that goes in the private folder is shown in the tutorial.
I can't put the other links with the code here, but if you need any more help, go to the first link and see if the tutorial helps.
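As a rough sketch of that approach (hedged: the user-agent list, route, and template name are illustrative assumptions, not taken from the tutorial):
// server/crawler.js: minimal sketch assuming meteorhacks:picker and meteorhacks:ssr
var crawlerUA = /bot|crawler|spider|facebookexternalhit|twitterbot/i;

// Compile the template stored at private/crawler-page.html once at startup.
SSR.compileTemplate('crawlerPage', Assets.getText('crawler-page.html'));

// Only requests whose User-Agent looks like a crawler reach these routes.
var crawlerRoutes = Picker.filter(function (req, res) {
  return crawlerUA.test(req.headers['user-agent'] || '');
});

crawlerRoutes.route('/:slug', function (params, req, res) {
  // Look up whatever data the page needs, then render the SEO template.
  var html = SSR.render('crawlerPage', {
    title: 'My page: ' + params.slug,
    description: 'Description for ' + params.slug
  });
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html);
});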

Related

AngularJS - SEO - S3 Static Pages

My application uses AngularJS for the front end and .NET for the back end.
In my application I have a list view. On clicking each list item, it fetches a pre-rendered HTML page from S3.
I am using Angular's state routing (ui-router).
app.js
...
.state('staticpage', {
  url: "/staticpage",
  templateUrl: function () {
    return 'http://xxxxxxx.cloudfront.net/staticpage/staticpage1.html';
  },
  controller: 'StaticPageCtrl',
  title: 'Static Page'
})
StaticPage1.html
<div>
  Hello static world 1!
</div>
How do I do SEO here?
Do I really need to do HTML snapshots using PhantomJS or the like?
Yes, PhantomJS would do the trick, or you can use prerender.io; with that service you can just use their open-source renderer and run your own server.
Another way is to use the _escaped_fragment_ meta tag.
I hope this helps; if you have any questions, add comments and I will update my answer.
Did you know that Google renders HTML pages and executes the JavaScript code in the page, and no longer needs any pre-rendering?
https://webmasters.googleblog.com/2014/05/understanding-web-pages-better.html
And take a look at these :
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
http://wijmo.com/blog/how-to-improve-seo-in-angularjs-applications/
My project's front end is also built on top of Angular, and I decided to solve the SEO issue like this:
I created an endpoint for all search engines (SE) where all the requests with the _escaped_fragment_ parameter go;
I parse the HTTP request for the _escaped_fragment_ GET parameter;
I make a cURL request with the parsed category and article parameters and get the article content;
Then I render the simplest (and SEO-friendly) template for the SE with the article content, or throw a 404 Not Found exception if the article does not exist.
In total: I do not need to pre-render HTML pages or use prerender.io, my users get a nice interface, and search engines index my pages very well.
P.S. Do not forget to generate sitemap.xml and include in it all the URLs (with _escaped_fragment_) which you want to be indexed.
P.P.S. Unfortunately my project's back end is built on top of PHP, so I cannot show you a suitable example, but if you want more explanation, do not hesitate to ask.
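Since the PHP back end isn't shown, here is a hedged sketch of the same flow in Node/Express instead; the route shape, fragment format, and fetchArticle helper are illustrative assumptions, not the author's code:
var express = require('express');
var app = express();

app.get('/', function (req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment === undefined) return next(); // normal users get the Angular app

  // A hash-bang URL like /#!/articles/42 arrives as /?_escaped_fragment_=/articles/42
  var match = /^\/articles\/(\d+)$/.exec(fragment);
  if (!match) return res.status(404).send('Not Found');

  // fetchArticle is a hypothetical data-access helper.
  fetchArticle(match[1], function (err, article) {
    if (err || !article) return res.status(404).send('Not Found');
    // Render the simplest SEO-friendly template with the article content
    // (assumes a view engine such as EJS is configured).
    res.render('seo-article', { title: article.title, body: article.body });
  });
});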
Firstly, you cannot assume anything.
Google does say that their bots can understand JavaScript applications very well, but that is not true for all scenarios.
Start by using the 'Fetch as Google' feature in Webmaster Tools on your link and see if the page is rendered properly. If yes, you need not read further.
In case you see just your skeleton HTML, it is because the Google bot assumes the page load is complete before it actually is. To fix this you need an environment where you can recognize that a request is from a bot and return it a prerendered page.
To create such an environment, you need to make some changes in code.
Follow the instructions in Setting up SEO with Angularjs and Phantomjs,
or alternatively just write code in any server-side language like PHP to generate prerendered HTML pages of your application
(PhantomJS is not mandatory).
Create a redirect rule in your server config which detects the bot and redirects it to the prerendered plain HTML files; a sketch of such a bot check is shown below. (The one thing you need to make sure of is that the content of the page you return matches the actual page content, else bots might not consider the content authentic.)
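A hedged illustration in Node/Express (in Apache or nginx the same idea is a rewrite rule keyed on the User-Agent header; the snapshot path layout is an assumption):
// Serve prerendered snapshots to known bots; everyone else gets the Angular app.
var BOTS = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit|twitterbot/i;

app.use(function (req, res, next) {
  if (BOTS.test(req.headers['user-agent'] || '')) {
    // Assumed layout: one snapshot file per route under ./snapshots
    return res.sendFile(__dirname + '/snapshots' + req.path + '.html');
  }
  next();
});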
Note that you also need to consider how you will add entries to sitemap.xml dynamically when you add pages to your application in the future.
In case you are not looking for such overhead and you are short on time, you can simply use a managed service like Prerender.
Eventually bots will mature, they will understand your application, and you will say goodbye to your SEO proxy infrastructure. For now, though, this is the state of things.
At this point in time, the question really becomes somewhat subjective, at least with Google: it really depends on your specific site, like how quickly your pages render, how much content renders after the DOM loads, and so on. Certainly (as @birju-shaw mentions) if Google can't read your page at all, you know you need to do something else.
Google has officially deprecated the _escaped_fragment_ approach as of October 14, 2015, but that doesn't mean you might not want to still pre-render.
YMMV on trusting Google (and other crawlers) for reasons stated here, so the only definitive way to find out which is best in your scenario would be to test it out. There could be other reasons you may want to pre-render, but since you mentioned SEO specifically, I'll leave it at that.
If you have a server-side templating system (PHP, Python, etc.), you can implement a solution like prerender.io.
If you only have AngularJS files hosted on a static server (e.g. Amazon S3), have a look at the answer in the following post: AngularJS SEO for static webpages (S3 CDN).
Yes, you need to prerender the page for the bots. prerender.io can be used, and your page must have this meta tag:
<meta name="fragment" content="!">

Can I put CSS and HTML markup inside a JS Flashify message

I am working on a project built with Sails.js. I have taken over this project and I'm not very familiar with Sails.js yet, so I'm still getting my bearings.
From what I can see, the Sails package includes Flashify, and the previous developer is using it in several places to display notifications. The client wants a few of those notifications to be a little fancier than just the default popup, and I'm wondering if it's possible to somehow restyle the Flashify popup and add HTML markup to it.
Here's one instance where a message is initiated:
if (!req.isAuthenticated()) {
  req.flash("error", "Please log in");
  return res.redirect('back');
}
I have tried adding HTML tags to the message string, escaping them, and taking them out of the string quotes, but none of that worked. Is it possible at all?
A couple of things:
Sails uses connect-flash by default. Your project might actually be using Flashify, but if you don't actually see require('flashify') in the code, it probably isn't.
More importantly, both connect-flash and Flashify are back-end packages that just save messages into the session so that they survive between requests. They have nothing to do with the actual display of the message, which is handled in your template (most likely an .ejs file in the views folder). If there's a popup, it's being created and launched in that template, so that's where you'd want to style it; a minimal sketch of the pattern follows. Take a look at the Sails views documentation for more about view rendering, and also this question about rendering flash messages in a view.
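For illustration, a hedged sketch of the usual connect-flash pattern (the view name and locals are assumptions, not from this project): the controller stashes the message, and a later action hands it to the view.
// In the policy/controller: stash the message, then redirect.
req.flash('error', 'Please log in');
return res.redirect('back');

// In the action that renders the target page: pass the messages to the view.
// res.view is Sails' render call; 'login' is a hypothetical view name.
return res.view('login', { messages: req.flash('error') });
In the .ejs view you can then loop over messages and wrap each one in whatever styled markup you like, for example <div class="alert"><%= message %></div>; since the markup lives in the template, there is no need to embed HTML in the flash string itself.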

Get URL and redirect before page load

So, I have an interesting situation. I've been working on reorganizing a directory on a website. I updated the old files (there are about 100 of them), and they are in a new location. The old files have been taken down.
The problem I have is that there are probably hundreds of people with bookmarks pointing directly to the URLs of the old files (e.g. "wahwah.com/subSite/pdfs/something.pdf"). These files are 5 years old, so those people need to find the new ones anyway.
So instead of having a page for each individual file, can I have something in the directory that used to house the files that watches for those URLs and redirects to the new page?
It would watch for "wahwah.com/subSite/pdfs.." and redirect, or maybe something in the main directory of this subSite would watch for the URL to have the /pdf path in it.
I know I can grab URLs in JavaScript, but that doesn't help me unless I can do what I stated above. I'm not sure how, if at all, I could do it in .NET. Our servers support .NET because most of our site apps were made with it, but I don't deal with those. I cannot use PHP; the servers don't run it.
I'm hoping JavaScript will be able to do it somehow, but it's something I've never tried before, so just thinking about it I'm not sure I can. I'm not much for using JS libraries, so I'm not sure what is out there; I've been searching a bit, though.
I found Grunt, but I'm not entirely sure how it works just yet; just looking around, maybe the file filter or matchBase, or some of the glob patterns, would help.
If you have access to the server, your best option is to set up the redirect there, on the wahwah.com/subSite/pdfs/ directory.
How to do this depends on whether you're on IIS or Unix.
In ASP.NET, a 301 redirect is fairly efficient.
if (HttpContext.Current.Request.Url.AbsolutePath.Contains("old.aspx"))
{
    HttpContext.Current.Response.Status = "301 Moved Permanently";
    HttpContext.Current.Response.AddHeader("Location", "http://www.new.aspx");
}
Or in Page_Load you can write:
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location","http://new.aspx");
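If changing the server config is truly off the table, a client-side fallback is conceivable, though it only helps when the old URL resolves to a page that can run JavaScript (a direct hit on a deleted .pdf will 404 before any script runs, unless you wire this into a custom 404 page). A hedged sketch; the new location is a placeholder:
// Dropped into a page served at the old /subSite/pdfs/ location.
(function () {
  var file = window.location.pathname.split('/').pop(); // e.g. "something.pdf"
  // replace() keeps the dead URL out of the browser history.
  window.location.replace('/subSite/newLocation/' + file);
})();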

Take content from a WordPress page and deliver it to HTML via Ajax

I have the following problem:
HTML blank page on server 1.
WordPress site on server 2.
What I need is to pull the content from www.wordpress.site/sample-page/ into the HTML page on server 1, but not the entire page, only the part that I can edit from wp-admin, so without header and footer.
Also, I don't know if there is any other method, but I need it to be done via JavaScript/jQuery or Ajax.
I've used Google, but it's hard to find a tutorial for this; I've tried a lot of them, but none is what I need, and I don't know enough JavaScript to make it work on my own.
So, can someone help me please?
BIG Thanks!
Andrei
Later edit:
I've found this working: http://jsfiddle.net/mdawaffe/hLWdH/
It works as written, but if I change the domain to mine, it does not.
What script do I have to implement on the server from which the content is called (taken)?
For more information, as you asked:
I have an HTML + CSS + JS template that I will use with PhoneGap (if you don't know about it, try it, it's very useful) to create a mobile app for Android, iOS, and BlackBerry.
Now, I have this site: m.trafficvoice.ro (I hope I can post links here).
In the 'live stream' page (it's called services.html), I have an HTML5 audio tag/player.
What I need is to get, from www.trafficvoice.ro/whatever-the-name-page, the content, but only the part that I can edit in WordPress (so without header and footer).
Why? Because in the future there will be more streams to add, and maybe some of them will go down for unknown reasons, so I need to be able to update that page without making an update for the entire app, uploading it to the store, waiting for approval, waiting for the client to download it, etc.
Big thanks!
Andrei
Could you just use an iframe instead? You could modify a template in your theme to not display the header/footer and then use that page in the iframe.
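If an iframe doesn't fit, another option, assuming a reasonably recent WordPress with the REST API available and CORS permitted (both assumptions, not givens in this thread), is to fetch just the page content as JSON with jQuery:
// Pull only the editable content of the page with slug "sample-page".
$.getJSON('http://www.wordpress.site/wp-json/wp/v2/pages?slug=sample-page')
  .done(function (pages) {
    if (pages.length) {
      // content.rendered is the post body alone, without theme header/footer.
      $('#content').html(pages[0].content.rendered);
    }
  })
  .fail(function () {
    $('#content').text('Could not load content.');
  });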

How to include a .js file in PL/SQL on Oracle 10g XE

Again: I'm making a PL/SQL-generated HTML5 web page, running on an Oracle 10g XE server. Okay, now that the setup is clear, my problem: I need to include a JavaScript file in the page. Simply
HTP.P('<script type="text/javascript" src="js/ScriptFileName.js"></script>');
doesn't work, of course. So I created a folder object and granted read and write to PUBLIC, then changed the string to match the newly created object instead of the path. It still doesn't work. I know I can write
HTP.P('<script type="text/javascript"> MY JAVASCRIPT HERE </script>');
And I've done so with other scripts (I even had to write CSS this way). But this time it will not work, the reason being that the JavaScript I'm trying to run was minified, so it's written all in one line, and there is a lot of it too. I tried to reverse it to normal formatting, but failed many a time.
So I went online and searched for a solution, and found one: it seems that this include should go not into the page but into the server config. Makes sense, since PL/SQL is server-sided. But when I went looking for the usual httpd.conf, it's nowhere to be found in the database directory. So I went online again; the result: not a word on where the HTTP server configs are in 10g XE, in any Oracle manual. I searched some forums; exactly one person asked where httpd.conf is in XE, and didn't get an answer. Please help, I'm desperate.
P.S. I don't use APEX; I don't get that mumbo-jumbo. So I write in Notepad and run the scripts at the SQL prompt.
Firstly, XE has its own built-in HTTP server called the 'Embedded PL/SQL Gateway' or EPG. But you don't HAVE to use that. You can use an Oracle HTTP Server with the mod_plsql plugin, or you can use the APEX Listener.
The question is: on what server is "ScriptFileName.js"?
Is it a flat file on the database server? If so, you'll need to use the Oracle HTTP Server (or Apache or similar) to serve it. The database is pretty much unaware of files on its server, and the EPG can't deliver them. [At least not in any practical sense; you could do weird things with chicken entrails and UTL_FILE, but you don't want to go there.]
Is it a file stored in the database? That sounds exotic, but it is pretty much how all the CSS, images, etc. are served up through the EPG. The best explanation of how to get files in and out of there is by Dietmar.
Is it a file stored on a separate machine? Often the best answer. The "src=" directive will be read by the end user's browser, which will do an HTTP GET to that URL. It doesn't have to be a URL on the same domain/host as the rest of the page.
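For that last option, the original call just needs an absolute URL; the host below is a placeholder, not a real server:
HTP.P('<script type="text/javascript" src="http://static.example.com/js/ScriptFileName.js"></script>');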
