I was convinced that single-page applications could not be crawled by Google unless the server provided alternative content.
Reading this article made me think that, while that used to be true, nowadays it is a mistake to assume that JavaScript templating blocks Google's crawling: https://googlewebmastercentral.blogspot.fr/2015/10/deprecating-our-ajax-crawling-scheme.html
Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
I tested a sample app with this tool: https://www.google.com/webmasters/tools/googlebot-fetch?utm_source=support.google.com/webmasters/&utm_medium=referral&utm_campaign=6155685
It worked: Google saw my content (its rendering was triggered by a jQuery plugin that waits for the DOM ready event and then renders the content with Handlebars.js).
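For context, the rendering on the test page looked roughly like this (a minimal sketch; the element ids, the template id and the JSON endpoint are made up):

    // Nothing meaningful is in the page body at load time; the content is
    // rendered client-side once the DOM is ready.
    $(document).ready(function () {
      // Hypothetical endpoint returning the page's data as JSON.
      $.getJSON('/api/articles.json', function (data) {
        // Compile a Handlebars template embedded in the page markup.
        var source   = $('#article-template').html();
        var template = Handlebars.compile(source);
        // Inject the rendered HTML into the (initially empty) container.
        $('#content').html(template(data));
      });
    });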
So here is the question: what is the state of the art in 2016? In other words, are single-page applications indexed by Google, and is there a drawback?
A teammate of mine told me the following; I quote him without offering any opinion on his testimony:
I heard on a podcast about tests showing that the results are inconsistent: one time the page is correctly indexed, another time it is not. IMHO Google is able to read JS pages, but it consumes too many resources, so it is not done systematically.
Beware also: they announced that they were going to stop indexing content that is not visible by default, such as content shown on click or rollover.
To conclude: I think those pages are indexed, but with a lower score.
Related
Suppose I have a JavaScript-heavy single-page web application. My JavaScript renders the DOM directly from the model/datasource (JSON).
I came up with an approach: generate simple HTML from the datasource on the backend. This HTML exists only for search engines to index. After the page loads, JavaScript replaces this quasi-HTML with the proper UI. The quasi-HTML can be kept out of the layout with display:none to avoid a performance penalty in the browser.
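A rough sketch of the idea (the ids, the endpoint and renderApp are placeholders, not a real API):

    // The backend has already emitted something like:
    //   <div id="seo-fallback" style="display:none">...plain HTML built from the JSON...</div>
    //   <div id="app"></div>
    // The client renders the real UI from the same datasource and drops the fallback.
    $(function () {
      $.getJSON('/api/page-data.json', function (data) {
        renderApp($('#app'), data);   // hypothetical client-side renderer
        $('#seo-fallback').remove();  // the quasi-HTML is no longer needed
      });
    });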
Will it work?
I am also concerned about the legitimacy of the approach.
Thoughts?
It should work, since it gives the search engines content to crawl even if they don't read JavaScript. That said, bots evolve and they read quite a bit of JavaScript nowadays. I've created a page that only has two sentences before load (onBeforeLoad) and uses Ajax to get the rest of the content, and I see Google indexing a lot of the keywords delivered by Ajax. A problem would be misleading the search bot, for example putting in content irrelevant to the rest of your page; the bot might pick that up at some point and penalize you for it. As for "I am concerned about the legitimacy of the approach": I wouldn't be; keep the code valid and ride on.
Alright, so I've been writing Backbone.js apps for over a year now and I love the framework model. I've learned how to avoid all the pitfalls and such, but there's one area in which I'm still quite weak as a single-page app developer: how to SEO a public-facing app.
I'm working on a blog project, and the easiest solution to my mind is to have a server-generated list of all blog entries visible as links from the /blog section, rendered on page load, and to ensure that when hitting a /blog/:id URL, the server loads the blog content into the very first div on the page, which will be set to display:none.
My question is whether this would be sufficient for a good search engine index. SEO is still my weakest skill as a developer. Are there techniques for making sure a search engine crawls this content first and is able to use that content for its more complex indexing?
Also, is there a way to blacklist the generated app content on the page, since I know Google has been testing crawling JavaScript apps? In my mind, that could never be done at the level it needs to be without some sort of standard browser-level event that fires on a full page render or after all data has been loaded.
Anyway, I know this is a somewhat ambiguous question, but it could end up being useful to people in the future if we collect good answers here.
Most of the major search engines (including Google) render the content they receive from the website, in our (Google's) case with something close to a headless browser, so whatever you do for your users, the search engines will also get. Serving different content to search engines, however, will get you into a dangerous area known as cloaking.
Hiding the content with display:none might backfire on you. We give hidden content way less weight in ranking.
I wonder whether content loaded dynamically via AJAX affects SEO and the ability of search engines to index the page.
I am thinking of building a constantly loading page, something like the Tumblr dashboard, where content is automatically loaded as the user scrolls down.
A year later...
A while back Google came out with specifications for how to create XHR content that may be indexed by search engines. It involves pairing content in your asynchronous requests with synchronous requests that can be followed by the crawler.
http://code.google.com/web/ajaxcrawling/
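On the server side, the pairing worked roughly like this (a sketch with Express; renderSnapshotFor is an invented helper, not part of the spec):

    var express = require('express');
    var app = express();

    // Under the scheme, a stateful URL like /#!page=2 was requested by the crawler
    // as /?_escaped_fragment_=page=2, and the server answered with a static snapshot.
    app.get('/', function (req, res) {
      var fragment = req.query._escaped_fragment_;
      if (fragment !== undefined) {
        // Crawler request: return pre-rendered HTML for the state in the fragment.
        res.send(renderSnapshotFor(fragment));   // renderSnapshotFor is hypothetical
      } else {
        // Normal request: serve the JavaScript application shell.
        res.sendFile(__dirname + '/index.html');
      }
    });

    app.listen(3000);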
No idea whether other search giants support this spec, or whether Google even does. If anybody has any knowledge about the practicality of this method, I'd love to hear about their experience.
Edit: As of today, October 14, 2015, Google has deprecated their AJAX crawling scheme:
In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. ... Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
H/T: #mark-bembnowski
Five years later...
Latest update on AJAX and SEO:
As of October 14, 2015, Google is able to crawl and parse AJAX-loaded content.
SPAs and other AJAX-rendered pages no longer need to provide two versions of the website for SEO.
Short answer: It depends.
Here's why: say you have some content that you want indexed; loading it with Ajax will ensure that it won't be. That content should therefore be loaded normally.
On the other hand, say you have some content that you wish to have indexed but, for one reason or another, do not wish to show (I know this is not recommended and is not very nice to the end user anyway, but there are valid use cases). You can load this content normally and then hide or even replace it using JavaScript.
As for your case of "constantly loading" content: you can make sure it's indexed by providing links for search engines and non-JS-enabled user agents. For example, you can have some Twitter-like content and, at the end of it, a "more" button that links to content starting from the last item you displayed. You can hide the button using JavaScript so that normal users never know it's there, but the crawlers will still index that content (by following the link).
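A rough sketch of that pattern (the ids and the /timeline URL are invented): the "more" link is plain, crawlable HTML, and the script hides it and takes over loading with AJAX.

    // Server-rendered markup, visible to crawlers:
    //   <div id="timeline"> ...items... </div>
    //   <a id="more-link" href="/timeline?before=12345">More</a>
    $(function () {
      var $more = $('#more-link').hide();   // normal users never see the link
      var loading = false;
      $(window).on('scroll', function () {
        var nearBottom = $(window).scrollTop() + $(window).height()
                         >= $(document).height() - 200;
        if (nearBottom && !loading) {
          loading = true;
          // Fetch the page the link points to and pull the next items out of it.
          $.get($more.attr('href'), function (html) {
            var $page = $('<div>').html(html);
            $('#timeline').append($page.find('#timeline').children());
            // A real implementation would also advance the link's href to the next page.
            loading = false;
          });
        }
      });
    });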
If some of your content is loaded by an Ajax request, then it is only loaded by user agents that run JavaScript code.
Search engine robots generally don't support JavaScript (or don't support it well at all).
So chances are that content loaded by an Ajax request will not be seen by search engine crawlers, which means it will not be indexed, and that is not good for your website.
Crawlers don't run JavaScript, so no, your content will not be visible to them. You must provide an alternative method of reaching that content if you want it to be indexed.
You should stick to what's called "graceful degradation" and "progressive enhancement". Basically, this means that your website should keep functioning and its content should remain reachable even when certain technologies are disabled.
Build your website with classic navigation, and then "ajaxify" it. This way, not only is it indexed correctly by search engines, it's also friendly for users who browse it on mobile devices, with JS disabled, and so on.
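A minimal sketch of that approach with jQuery (the nav and #main selectors are assumptions about the markup):

    // The navigation is plain <a href="..."> links that work without JavaScript.
    // When JS is available, intercept clicks and swap only the main content area.
    $(function () {
      $('nav a').on('click', function (e) {
        e.preventDefault();
        var url = $(this).attr('href');
        // Load just the #main fragment of the target page into the current one.
        $('#main').load(url + ' #main > *');
        // Keep the address bar (and bookmarks) roughly in sync.
        history.pushState(null, '', url);
      });
    });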
Two years later, the Bing and Yahoo search engines also now support Google's Ajax Crawling Standard. Information on the standard can be found here: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started.
The accepted answer on this question is no longer accurate. Since this post still shows in search results, I'll summarize the latest facts:
Sometime in 2009, Google released their AJAX crawling proposal. Other search engines added support for this scheme shortly thereafter. As of today, October 14, 2015, Google has deprecated their AJAX crawling scheme:
In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. ... Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
I'm making an app that requires dynamic content to be fully rendered on the page for search engine bots, which is potentially a problem if I use JS templating to control the content. Web spiders are supposedly getting better at indexing RIA sites, but I don't want to risk it. Also, as mobile internet is still spotty in most places, it seems like good practice to do as much as possible on the server for the initial load, to ensure that basic functionality, styles and dynamic content show up on your pages even if the client hasn't downloaded any JS libraries.
That's how I stumbled upon dual-side templating:
Problem: how can you allow for dynamic, Ajax-style rendering in the browser, but at the same time output the same content from the server on the initial page load?
c. 2010: Dual-Side Templating. A single template is used on both browser and server, to render content wherever it's appropriate: typically the server as the page loads and the browser as the app progresses. For example, blog comments. You output all existing comments from the server, using your server-side template. Then, when the user makes a new comment, you render a preview of it, and the final version, using browser-side templating.
I want to try dual-side templating with Node.js and Eco templates, but I don't know how to proceed. I'm new to JavaScript and all things Node.
Node-Lift is said to help, but I don't understand what it's doing or why.
Can someone provide a high-level overview of how you might use dual-side templating in the context of a mobile web app?
Where does server-side DOM manipulation with jQuery and JSDOM fit into the equation?
TIA
Dav Glass gave a great talk about this last year: http://www.youtube.com/watch?v=bzCnUXEvF84
And here is a blog article that goes over some of the details: http://www.yuiblog.com/blog/2010/04/09/node-js-yui-3-dom-manipulation-oh-my/
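To make the flow concrete, here is a rough sketch of dual-side templating using Handlebars rather than Eco (the file path and the comment data shape are invented):

    // Server side (Node): render the existing comments with a shared template file.
    var fs = require('fs');
    var Handlebars = require('handlebars');

    var source   = fs.readFileSync('templates/comment.hbs', 'utf8');
    var template = Handlebars.compile(source);

    function renderCommentsHtml(comments) {
      // One rendered block per comment; this string is embedded in the initial page.
      return comments.map(function (comment) {
        return template(comment);
      }).join('\n');
    }

    // Browser side: the same template source is shipped to the client
    // (e.g. in a <script type="text/x-handlebars-template"> tag) and renders
    // the preview and the final version of a newly posted comment:
    //
    //   var template = Handlebars.compile($('#comment-template').html());
    //   $('#comments').append(template(newComment));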
If I decided to use some JavaScript in my website, like
$('#body').load(URL);
or
$.get(URL, {param:value}, function(){ ... });
or
document.title = 'TEXT';
Is it good for SEO? Or am I better off using pure PHP to put the data on the page for SEO purposes?
The question of whether JavaScript is good for SEO or not misses the point. We should pretty much assume that any content which is only available via JavaScript will not be crawled by search engines. Google at least claims to be able to crawl some JavaScript-only content, but is fairly tight-lipped about what exactly it can crawl. Other search engines probably don't crawl it, and it's certainly the case that not all do. So assume it doesn't get crawled.
That doesn't mean it's bad for SEO.
If the content would contribute to your SEO, then loading it with JavaScript is bad for SEO. If the content is neutral to SEO, then it's neutral for SEO. So the answer to your question really depends on the nature of your content. If the content is part of your SEO campaign, then stick with server-side HTML generation, be it PHP or some other method. Otherwise, the question of SEO has no bearing on the decision to use JavaScript or not. Accessibility would be another thing to take into account; JavaScript-only content is terrible for that.
The larger search engines can and do render limited amounts of JavaScript. However, for SEO purposes your best bet is rendering the content as HTML rather than via JavaScript. A good rule of thumb is to use HTML for content and basic structure (e.g., paragraphs = p, lists = ul/ol, headings = h1/h2/h3, etc.), CSS for presentation, and JS for client-side behavior. That being said, always ensure a good user experience first. If you can do the above while providing a great user experience, great! If you can't, users come first. It's likely you can keep both users and bots happy 95% of the time if you take the time to do so.
Further reading (sorry, I can only post one link as a new user):
Matt Cutts Interview (Check out #26 on Google Javascript Rendering)
A spider's view of Web 2.0
EDIT: Added that for "a new user" ;) ~ drachenstern
I think first you should consider what SEO means. It means "Search Engine Optimization" ... how does a search engine get data in the first place for it to be optimized?
It does a GET on the page and whatever data is returned in the GET is processed. No JS engine. No POST data. So you should be optimizing for whatever data is returned on a GET.
Additionally, you tagged this with PHP, but the question has nothing to do with PHP.
Have you seen any of the questions on this list?
https://stackoverflow.com/search?q=javascript+seo
No sir, Google does not handle Flash and JavaScript properly, so it may not crawl areas that use JavaScript or Flash content. I suggest you keep your website simple, but if it is necessary to keep Flash or JavaScript content, then you should keep a text-based backup.
The first thing you should be asking is not what is good for SEO, but what is good for users. For users, loading data with JavaScript will give them an interactive page, where they can start seeing the page immediately while it is still loading and where the page can update without having to reload it.
From Google's Webmaster Guidelines and its article on cloaking, you should not assume that crawlers can understand JavaScript. This does not mean that you should not use JavaScript on your website, but rather that you should provide the textual equivalent in noscript tags, both for users with JavaScript disabled and for crawlers, bearing in mind that the content of these noscript tags should be roughly equivalent to what is shown with JavaScript enabled; showing different content to users and to search engines is called "cloaking" and is frowned upon, to say the least.
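As a small illustration of that equivalence (the markup, ids and endpoint are made up):

    // The page ships roughly equivalent markup for non-JS user agents, e.g.:
    //
    //   <div id="news"></div>
    //   <noscript>
    //     <ul>
    //       <li><a href="/news/1">First headline</a></li>
    //       <li><a href="/news/2">Second headline</a></li>
    //     </ul>
    //   </noscript>
    //
    // For everyone else, the script fills #news with the same items.
    $.getJSON('/api/news.json', function (items) {
      var $list = $('<ul>');
      $.each(items, function (i, item) {
        $list.append(
          $('<li>').append($('<a>').attr('href', item.url).text(item.title))
        );
      });
      $('#news').append($list);
    });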
Google doesn't (yet) execute a page's JavaScript (JS). So if your JS replaces or creates content on a page, that content would normally be invisible to the crawlers (not good).
But the Googlers have implemented a URL hack that enables your server to create pages (from the server, not from JS) for all the different variants of your JS page's content.
This solves the SEO problem for Ajax-powered pages. At least for Google searches...
See Crawlable Ajax.
JavaScript, or any script for that matter, should never be used to house your site's content, ever! The entire web is driven by HTML and CSS, and in rare cases XML languages; everything else is a headache when it comes to SEO. Ask yourself this question: what exactly is SEO, and what is it that search engines are indexing? JavaScript and all programming/scripting languages are proprietary; this means they are NOT standards as defined by the W3C, which means they are essentially worthless when it comes to indexing content. On the other hand, HTML, CSS, and XML are real standards developed for the web!

It's fine to use scripts to add functionality to your pages, embed apps like social networking plugins, and so on, but you should never use them to hold your website's HTML, CSS, or actual content, ever, for any reason. Here's a link to a good article that explains why you should be using HTML and CSS, and not a million scripts: optimizing webpages using proper HTML markup.

Scripts cause other problems besides code that is hard for search engines to decipher. For one, they are harder for browsers to process, causing pages to load much more slowly than "static" pages made with HTML and CSS would. Pages made with PHP tend to create "dynamic" URLs that users and search engines cannot read. This is why Google recommends that people who use JSP or PHP for their webpages include a sitemap; otherwise your links will never be found and might as well not exist.

Stick to the conventions! Let's face it, we have standards for a reason. If every electronic component in your home had a different type of plug that required a special socket, and all those devices had differing voltage and amperage requirements, what would happen? You would essentially burn down your house! And you'd be spending five hours a day at the hardware store looking for special adapters to fit your wall sockets.

If you plan on designing a website, use scripts only for embedding apps or connecting to a database, and use HTML and CSS to build "static" webpages. Also, use text links, as they are both human- and search-engine-readable, and easy to index and make sense of. Never use scripts for your links. Programming and scripting can be fun, but not on the internet.
Search engines index HTML, CSS, and content (multimedia, graphics, videos, text; that's it!); everything else is pointless and annoying to both users and search engines alike. For best results, use XML and design a custom language.
Google can crawl, index, and rank JavaScript-generated content.
But... it uses an old Chrome version (42) with an old JavaScript rendering engine.
The consequence is that your JavaScript code needs to work in older browsers and older Chrome versions (older than 42). So no fancy ES6 features; you need to use polyfills or a transpiler such as Babel, for example.
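For example, a hypothetical babel.config.js targeting that old rendering engine could look roughly like this (the exact targets are an assumption):

    // Compile the bundle down to what an old Chrome (~42) understands,
    // and pull in core-js polyfills only where they are actually used.
    module.exports = {
      presets: [
        ['@babel/preset-env', {
          targets: { chrome: '42' },
          useBuiltIns: 'usage',
          corejs: 3
        }]
      ]
    };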
Although you can do a lot with JavaScript (like handling click events or injecting your mobile menu), it's still recommended to use plain a href links for navigation instead of a button with a JavaScript event handler and a function that takes you to a new page.
You can check your page with Google's mobile-friendly testing tool, https://search.google.com/test/mobile-friendly, and review the errors, warnings, and logs. If the rendered output looks as intended, Google will see your content.
In Search Console you can also ask Google to index the page. Sometimes the JavaScript crawler comes first, sometimes the 'classic' crawler does.
Double-check it some days afterwards by googling a sentence or paragraph from your page.
There's no general answer as to whether it's better or not. Content is content, and Google should rank your website, SPA, PWA, AMP site, PDF document, online doc, wiki page, and so on based on its content, not on the underlying technique.
If you are familiar with JavaScript, give it a go.
Regards, Peter