SEO and crawling: UI-Router ui-sref VS ng-click - javascript

After looking around a bit I came to no conclusion about this matter: do Google and other search engines crawl pages that are only accessible through ng-click, without an anchor tag? Or does an anchor tag always need to be present for crawling to work successfully?
I have to build various elements which link to other pages in a generic way and ng-click is the best solution for me in terms of flexibility, but I suppose Google won't "click" those elements since they have no anchor tag.
Besides the obvious ui-sref directive, I have thought about other solutions like:
<a ng-click="controller.changeToLink()">Link name</a>
Although I am not sure if this is a good practice either.
Can someone please clarify this issue for me? Thanks.

Single page applications are in general very SEO unfriendly, ng-click not being followed being the least of the problems.
The application does not get rendered server side, so search engine crawlers have a hard time properly indexing the content.
According to this latest recommendation, the Google crawler can render and index most dynamic content.
The way it works is that the crawler waits for the JavaScript to kick in and render the application, and only indexes the page after the content has been injected. This process is not foolproof, and until recently single page applications could not compete with statically rendered pages for SEO.
This is the main reason why most sites only use SPA-style navigation for things like their menu system, where it makes for a much better user experience than full page reloads: single page apps as a whole are not SEO friendly.
This is slowly changing now that Angular Universal, Ember FastBoot and React server-side rendering add the possibility of rendering an SEO-friendly page on the server while still having it take over as an SPA on the client.
I think your best bet to improve your SEO is to submit a sitemap file to Google using their webmaster tools. This will let Google know about the pages that you trigger via ng-click.
Note that this only has a chance of working if you are using HTML5 mode for the router rather than hash-based URLs (URLs containing #), as Google does not index the fragment part of a URL.
In general it's very hard to get good SEO for an Angular 1 app, and that's why it's mostly not used for public indexable content. The sweet spot of AngularJS is building the private "dashboard" section of your app that users access after logging in.
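For what it's worth, here is a minimal sketch of why ui-sref is generally the safer choice for links (the state, controller and template names below are made up, not taken from your app): ui-sref renders a real href into the anchor that crawlers can follow, while ng-click leaves the anchor without one.

<!-- ui-sref writes href="/articles/42" into the anchor, so a crawler sees a normal link -->
<a ui-sref="article({ id: 42 })">Article title</a>

<!-- ng-click navigates only through JavaScript; there is no href for a crawler to follow -->
<a ng-click="ctrl.changeToLink()">Article title</a>

angular.module('app', ['ui.router'])
  .config(function ($stateProvider, $locationProvider) {
    $stateProvider.state('article', {
      url: '/articles/:id',
      templateUrl: 'article.html',
      controller: 'ArticleController as ctrl'
    });
    // HTML5 mode produces crawlable URLs without the # prefix (requires a <base href="/"> tag)
    $locationProvider.html5Mode(true);
  })
  .controller('ArticleController', function ($state) {
    // The ng-click variant ends up doing something like this programmatically
    this.changeToLink = function () {
      $state.go('article', { id: 42 });
    };
  });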

Try using prerender.io to prerender these AngularJS pages, filter out bot requests, and serve the prerendered pages from its page cache.
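A sketch of what that can look like with an Express server and the prerender-node middleware (the token and static folder are placeholders; adjust to your own setup):

const express = require('express');
const prerender = require('prerender-node');

const app = express();

// prerender-node inspects the User-Agent (and _escaped_fragment_) to detect crawlers
// and proxies those requests to the prerender service, which returns cached static HTML.
app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));

// Everyone else gets the normal client-side AngularJS app
app.use(express.static('dist'));

app.listen(3000);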

Related

How facebook, twitter reload their page without any refresh?

I am very curious about this technology: how do Facebook, Twitter, and many other websites reload their pages after clicking on a link without any refresh?
I searched for this on Google but did not find any helpful information, including in this Quora thread.
Someone there says that they use the WebSocket API or AJAX to request content like that.
So, what is this technique/technology called?
Most modern websites are powered by front-end frameworks like React, Angular, Vue and many others, whose main feature is to dynamically construct the DOM in response to user actions without the need for a page reload.
One of the power tools of these frameworks is the router, which pretty much reconstructs the page from a blueprint stored on the front-end side.
Please have a look on the working demo of React Router:
https://codesandbox.io/s/nn8x24vm60
P.S.: Essentially the JavaScript hides/removes specific elements in the DOM and replaces them with the expected ones when the user navigates using router links (which can look like normal links to another developer inspecting the DOM, unless you also inspect the attached event listeners).
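To make that concrete, here is a minimal framework-free sketch of what such a router does under the hood (element ids and routes are made up): intercept link clicks, update the URL with the History API, and swap the DOM content without a full reload.

// Client-side "router" sketch: no framework, ids and routes are made up
const routes = {
  '/': '<h1>Home</h1>',
  '/about': '<h1>About</h1>'
};

function render(path) {
  // Swap the page content instead of asking the server for a new document
  document.getElementById('app').innerHTML = routes[path] || '<h1>Not found</h1>';
}

document.addEventListener('click', function (event) {
  const link = event.target.closest('a[data-internal]');
  if (!link) return;
  event.preventDefault();                    // stop the full page reload
  history.pushState({}, '', link.pathname);  // update the address bar
  render(link.pathname);                     // re-render from the client-side "blueprint"
});

// Handle the browser back/forward buttons
window.addEventListener('popstate', function () { render(location.pathname); });

render(location.pathname);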

NextJS Dynamic Pages cannot be crawled

I'm using NextJS and ExpressJS as Server.
I already implemented the custom routes like the example in the documentation of nextjs (https://nextjs.org/docs#custom-routes-using-props-from-url).
I am also using getInitialProps for server-side rendering.
I also used the Screaming Frog SEO Spider as a crawler to test whether it can crawl my dynamic pages; it can't, it only crawls the static pages.
I don't know if I'm doing something wrong but I just followed the documentation for custom routes.
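My setup looks roughly like the docs example below (the route and page names here are placeholders, not my exact code):

// server.js
const express = require('express');
const next = require('next');

const dev = process.env.NODE_ENV !== 'production';
const app = next({ dev });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  const server = express();

  // Map a clean URL onto a dynamic page, passing the id along as a query param
  server.get('/p/:id', (req, res) => {
    return app.render(req, res, '/post', { id: req.params.id });
  });

  server.get('*', (req, res) => handle(req, res));

  server.listen(3000);
});

// pages/post.js
import React from 'react';
const Post = ({ id }) => <h1>Post {id}</h1>;
Post.getInitialProps = ({ query }) => ({ id: query.id });
export default Post;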
I really want the crawlers to crawl my dynamic pages because it will affect the SEO of our website.
Thanks
There is a common SEO recommendation not to build dynamic websites.
I am not an expert in NextJS and ExpressJS, but in general most crawlers don't like dynamic websites. To crawl a dynamic website they need to execute JavaScript, which takes time and resources. As far as I know Google can crawl dynamic websites, please follow the link, so it is possible that Googlebot crawls your website successfully. Still, please do not rely on a pure SPA if SEO matters.
About the Screaming Frog SEO Spider: as far as I know it can also use Chromium to render JavaScript, like Googlebot. Please read its documentation.
For my project, I added a sitemap.xml.tsx as a page that allows the Google crawler to see all the available pages. In order for this to work, you have to be able to retrieve all the possible dynamic pages that you want to be crawled then create the sitemap.
I would follow along with the example given here: https://dev.to/timrichter/dynamic-sitemap-with-next-js-41pe on how to correctly implement the sitemap.
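As a rough sketch of one way that can be wired up (the getAllPostIds helper and the domain are placeholders for however you enumerate your dynamic pages):

// pages/sitemap.xml.js
import { getAllPostIds } from '../lib/posts'; // placeholder: however you list your dynamic pages

const Sitemap = () => null; // the response is written by hand below, so nothing is rendered

Sitemap.getInitialProps = async ({ res }) => {
  const ids = await getAllPostIds();

  const xml = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${ids.map((id) => `  <url><loc>https://example.com/posts/${id}</loc></url>`).join('\n')}
</urlset>`;

  // Write the XML directly and end the response so Next.js does not render a page
  res.setHeader('Content-Type', 'text/xml');
  res.write(xml);
  res.end();
  return {};
};

export default Sitemap;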

angular 6 Multi-Page App How to

I'm developing a complex information system, and our front-end stack is defined to use the Angular framework. I'm aware that Angular was planned and is mostly suitable for single-page applications. But at this stage, I'm facing an issue with MPA support in Angular 6. Basically, our client's requirement is that to view certain elements in the system, they have to be opened in a different tab, because people will normally open several of them and use them to gather or compare elements against each other.
My current app is distributed among multiple lazy loaded modules, so my question is: what is the best way to implement MPA support in Angular in order to solve this issue? I know that if I open a link in a separate tab, the whole application has to be downloaded by the client before they can view the particular page. Can anybody advise on solutions for this case, or whether it's possible to avoid downloading the whole app in a new tab? Thanks.
PS. I've browsed through the whole internet, but haven't found any solution for this.
I am working on a multi-page app using Angular these days. There we use normal location.href navigation for routing rather than the Angular router module. This way an Angular app can be used as a multi-page app. When we do this, every time the browser reloads, Angular bootstraps everything and loads from the app component onwards. So when you use lazy loading, you can limit the number of modules loaded every time the browser is refreshed. The same happens when you open something in a new tab.
As suggested by #Suresh Kumar Ariya, server-side rendering just renders the static content of a page while the JavaScript files needed for the page's dynamic functionality load in the background. So I don't think this is what you are looking for, unless you just want to serve static content fast to improve the user experience.
What you can do is lean more heavily on lazy loading to minimize the initial load, and optimize your code, as sketched below.
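A sketch of what that lazy loading looks like with the Angular 6 router (module paths and names are placeholders); each feature module is only downloaded the first time its route is visited, whether in the current tab or a new one:

// app-routing.module.ts
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  // Angular 6 string syntax; newer versions use loadChildren: () => import(...)
  { path: 'reports', loadChildren: 'app/reports/reports.module#ReportsModule' },
  { path: 'compare', loadChildren: 'app/compare/compare.module#CompareModule' }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule {}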
You can also opt for Angular with server-side rendering (Angular Universal), so data can be shared between the client and the server side: https://angular.io/guide/universal
Thanks everybody for the advice. I solved the issue with service workers, and now all tabs load instantly =) thanks to #Patryk Brejdak.
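(For anyone landing here later: a sketch of the standard Angular service worker registration that ng add @angular/pwa generates, assuming that is the setup used.)

// app.module.ts
import { NgModule } from '@angular/core';
import { ServiceWorkerModule } from '@angular/service-worker';
import { environment } from '../environments/environment';

@NgModule({
  imports: [
    // Caches the app shell and lazy-loaded chunks, so new tabs load from the local cache
    ServiceWorkerModule.register('ngsw-worker.js', { enabled: environment.production })
  ]
})
export class AppModule {}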

Search engine indexing of single page applications

Alright, so I've been writing Backbone.js apps for over a year now and I love the framework model. I've learned how to avoid all the pitfalls and such, but there's one area where I'm still quite weak as a single-page app developer: how to handle SEO for a public facing app.
I'm working on a blog project, and the easiest solution to my mind is to have a server generated list of all blog entries visible as a link from the /blog section that is rendered on page load, and to ensure that when hitting a /blog/:id url, the server loads the blog content into the very first div on the page, which will be set as display:none.
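Concretely, the server-rendered output I have in mind would look roughly like this (titles, paths and ids are placeholders):

<!-- Rendered by the server on page load, before the Backbone app takes over -->
<div id="server-content" style="display:none">
  <h1>Blog post title</h1>
  <p>Full post body rendered server side...</p>
</div>

<ul id="blog-index">
  <li><a href="/blog/1">First post</a></li>
  <li><a href="/blog/2">Second post</a></li>
</ul>

<div id="app"><!-- Backbone renders the interactive app here --></div>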
My question is whether this should be sufficient for a good search engine index. SEO is still my weakest skill as a developer. Are there techniques for making sure a search engine crawls this content first and is able to use that content for its more complex indexing?
Also, is there a way to blacklist the generated app content on the page as I know Google has been testing crawling JavaScript apps? In my mind that could never be done at the level it needs to be without some sort of standard browser level event that can be triggered on a full page render or after all data has been loaded.
Anyways, this is more of an ambiguous ticket I know, but it could end up being useful to people in the future if we get a collection of good answers here.
Most of the major search engines (including Google) render the content they receive from the website, in our (Google's) case with something close to a headless browser, so whatever you serve to users the search engines will also get. Serving different content to search engines, however, gets you into a dangerous area called cloaking.
Hiding the content with display:none might backfire on you: we give hidden content much less weight in ranking.

AngularJS wrapper for extjs4 application - good design?

I am loading an ExtJS 4(.2.1) application within a div in a JSP page. In my JSP I display various links based on what the user is allowed to do. On click of each link the page refreshes and I set some JavaScript variables (based on server-side logic) which are used by the ExtJS app as input.
In order to get rid of the page refreshes, and therefore improve performance, I have refactored this page using AngularJS (I just learnt Angular, so I thought I would try using it). I have used routing, so now I get all the inputs for each of the menu clicks at once on page load. When the user clicks a link, the Angular route sets up the appropriate inputs for the ExtJS application without refreshing the page or going to the server. The ExtJS application now lives in an iframe instead of a div as before, so the Angular route basically refreshes the iframe each time a link is clicked to reload the ExtJS app.
The results seem good. Pages load faster.
My questions :
Is this good design?
I know AngularJs's real power is in data binding and directives which I do not leverage. Is it an overkill to use AngularJs for this usecase?
Is there a better suited library for this specific purpose?
Thanks for your time.
While it may work, other people maintaining the app need to understand the two frameworks and visitors to your site will have to download all that extra code.
It looks like Ext JS 4 has an extension for UI routing (ext-ux-router) and Ext JS 5 has it built in.
By using a router built into Ext JS you may be able to avoid the iframe reloading hack, as sketched below.
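For reference, a sketch of what Ext JS 5's built-in routing looks like (class, route and view names are placeholders): hash changes map to controller methods, so the relevant view can be swapped in without reloading an iframe.

// Hypothetical Ext JS 5 ViewController using routes; all names are placeholders
Ext.define('App.view.main.MainController', {
    extend: 'Ext.app.ViewController',
    alias: 'controller.main',

    routes: {
        // Fires when the URL hash becomes something like #report/42
        'report/:id': 'onReportRoute'
    },

    onReportRoute: function (id) {
        // Hypothetical handler: activate the matching card instead of reloading an iframe
        this.getView().getLayout().setActiveItem(1);
    },

    onLinkClick: function () {
        // Navigate by changing the hash; the matching route handler above fires
        this.redirectTo('report/42');
    }
});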
