I have a single-page application built with AngularJS. All requests are served the index.html, and from there Angular handles the routing and queries a set of API endpoints for the data to display.
The title, SEO metadata, and description for the site are obtained the same way. The catch is that the API endpoint is on a different domain, so the SPA is actually making cross-origin requests to get the data.
Everything works fine from a user's point of view. However, when Google crawls the site, it does not pick up any metadata or title; instead, it just shows the raw Angular template tags.
Looking through the site logs, I can see Googlebot making only the OPTIONS (CORS preflight) request and never following up with the actual GET.
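(For reference, a cross-origin GET is preceded by an OPTIONS preflight, and the GET only follows if the preflight response carries the right CORS headers. A minimal Express sketch of an API answering that preflight; this is illustrative only, not my actual server code:)

    // Illustrative: answer the CORS preflight so the GET can follow.
    // The allowed origin below is just the SPA's domain as an example.
    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
      res.setHeader('Access-Control-Allow-Origin', 'https://www.careercontroller.com');
      res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
      res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
      if (req.method === 'OPTIONS') {
        // Respond to the preflight directly; if this succeeds,
        // the client should issue the actual GET next.
        return res.sendStatus(204);
      }
      next();
    });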
How can I get Google to index the page properly?
Here is a screenshot of what it looks like:
The site is https://www.careercontroller.com
Any help would be appreciated.
NOTE: I know I can get this to work by generating static HTML from the server using PhantomJS or something, but I'm looking to get Google to index it properly since, according to them, they crawl AngularJS apps just fine.
I have actually gotten this to work before, except the requests were not cross-domain, so could that be the problem?
This is a common issue that is solved by products like prerender.io.
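If the index.html were served by a Node/Express app, prerender.io's Express middleware is typically wired up like this (a sketch; the token and the 'dist' folder are placeholders):

    // Sketch: serve prerendered HTML to crawlers via prerender.io's
    // prerender-node middleware. 'YOUR_PRERENDER_TOKEN' is a placeholder.
    const express = require('express');
    const prerender = require('prerender-node');

    const app = express();
    app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));
    app.use(express.static('dist')); // the built SPA, incl. index.html
    app.listen(3000);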
Related
Someone helped me with web scraping using tracker.gg's API and Puppeteer, but since the season change the API returns this error message:
{"errors":[{"code":"CollectorResultStatus::InvalidParameters","message":"One of the provide parameters is invalid.","data":{}}]}
It used to return an array with all the data needed for the program.
Can anybody help me find the right website for the new season's statistics?
I'm aware that this post is old, but you can always find the specific endpoints their frontend calls on their backend by monitoring the network traffic the page makes and filtering it by 'api' or 'tracker'.
Note that Tracker.gg is against anyone web scraping their website for the Valorant section (and maybe other games) and may actively try to block you from doing so. Check their robots.txt just to be sure.
I have personally scraped data from their site for my own project and was booted off as a result, thanks to their Cloudflare 'anti-bot' detection. An example of an API endpoint I found through this method is https://api.tracker.gg/api/v2/valorant/standard/profile/riot/{userURL}?forceCollect=true
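A minimal sketch of calling such an endpoint once you've found it (assuming Node 18+ with built-in fetch, and that the endpoint still responds; {userURL} stays a placeholder for the riot id, and Cloudflare may block non-browser traffic regardless):

    // Sketch only: query the endpoint discovered via the network tab.
    async function getProfile(riotId) {
      const userURL = encodeURIComponent(riotId); // e.g. 'SomeName#TAG' (placeholder)
      const url = `https://api.tracker.gg/api/v2/valorant/standard/profile/riot/${userURL}?forceCollect=true`;
      const res = await fetch(url, {
        headers: { 'User-Agent': 'Mozilla/5.0' }, // some endpoints reject the default agent
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    }

    getProfile('SomeName#TAG').then((body) => console.log(body.data)); // payload shape is a guess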
I have a static site; there is no way to add rewrites via .htaccess or similar, which is how I would normally approach this functionality. We're running the site with Vue, on top of static .html templates, e.g.
/example/index.html
So I can visit www.mywebsite.com/example/ and it'll load the page and run Vue. When I want a subpage based on this layout, I currently have to create
/example/subpage/index.html
Again, this works great (www.mywebsite.com/example/subpage/), but what I want is to pull data in via an API feed and have dynamic URLs like
/example/subpage/any-page-name-here
The closest I've found is to use #, so
/example/subpage#any-page-name-here
This allows Vue to pick up the data after the # and query the API with it.
Any help would be greatly appreciated. There's no workaround for the limitations of the hosting, so I need a Vue/JS/HTML-only solution.
Thanks!
As you cannot change the web server configuration, the only possibilities are the hash option or the query string, e.g.
example.com/site/?dynamic-data
The reason is that the web server decides what to do with the request in the first instance, and without any configuration it will simply load a page if it exists or show a 404. This happens before your Vue app is ever invoked.
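For completeness, a sketch of how the hash option looks with Vue Router (assuming a Vue 3 build with Vue Router 4; the component names are placeholders):

    // Sketch: hash-based routing, which works on a purely static host
    // because everything after '#' never reaches the web server.
    import { createApp } from 'vue';
    import { createRouter, createWebHashHistory } from 'vue-router';
    import App from './App.vue';
    import SubPage from './SubPage.vue'; // placeholder component

    const router = createRouter({
      history: createWebHashHistory(),
      routes: [
        // matches /example/subpage/#/any-page-name-here
        { path: '/:slug', component: SubPage, props: true },
      ],
    });

    createApp(App).use(router).mount('#app');

Inside the component, the slug prop is what you would send to your API to fetch the page's data.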
I have created an app that generates user API keys using a Discourse site. They are returned as an encrypted payload in a parameter of the return URL.
Initially I was using a WebView, which worked fine but wouldn't allow users to log in with Google due to security risks, and this is a very important part of the app. I need to be able to read the payload parameter and store it in the app for making API calls.
I have tried multiple NativeScript plugins, including AdvancedWebView and AwesomeWebView, and investigated many others that don't seem to meet my requirements.
Is what I'm asking possible in NativeScript-Vue? If not, how would I do it?
I have been looking for JavaScript code to access the Google Sites API, but I can't find anything definite.
All I want to do is be able to take the contents of a page I made on Google Sites and display them on another website I own.
Is this possible, and if so, are there tutorials or example code available?
You can retrieve the contents of Google Sites via the API:
https://developers.google.com/google-apps/sites/docs/1.0/developers_guide_protocol#ContentFeedGET
You could create a PHP/Python script on your own server to execute the API commands on demand and return the result. Via JavaScript/AJAX you could then access it locally on your own server, without cross-origin problems.
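The answer suggests PHP/Python, but the same same-origin-proxy idea can be sketched in Node/Express (assumes Node 18+ for built-in fetch; the route name and feed URL below are placeholders, not the real ones):

    // Sketch: a same-origin proxy that fetches the Sites content
    // server-side, so the browser never makes a cross-origin call.
    const express = require('express');
    const app = express();

    app.get('/sites-content', async (req, res) => {
      try {
        // Placeholder for the actual content-feed URL from the
        // Sites API documentation linked above.
        const FEED_URL = 'https://sites.google.com/feeds/content/...';
        const upstream = await fetch(FEED_URL);
        res.type('application/xml').send(await upstream.text());
      } catch (err) {
        res.status(502).send('Upstream fetch failed');
      }
    });

    app.listen(3000);

Your own page can then call /sites-content with a plain AJAX request, since it lives on the same origin.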
I have a few single-page web apps on multiple domains that rely heavily on JavaScript/AJAX to fetch and show content. Based on logs and search results, I can tell that Googlebot runs JavaScript on some of the domains but not on others. On some it indexes everything that's only available with JS; on others it doesn't even seem to run JS at all.
Can anybody tell me how Googlebot decides what JS to run, and whether I can do anything to get it to run JS on my other domains?
PS: I know that normally I should use something like server-side rendering for this, but I'm not at all dependent on search results and rankings, so it's not really worth the effort. I'm just curious how Googlebot decides whether to run JS, and whether there's anything easy I can do to change that on my other domains.
You can learn more about how Google renders AJAX-based websites, along with a list of best practices, directly from the Google developer websites here:
https://webmasters.googleblog.com/2014/10/updating-our-technical-webmaster.html
https://developers.google.com/webmasters/ajax-crawling/
Regarding your specific problem, as a first step I suggest you analyse each domain using the "Fetch as Google" feature in Google Webmaster Tools and go through every technical aspect mentioned in the Google guide:
https://support.google.com/webmasters/answer/158587?hl=en
I think Google has updated its research on the subject:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
The functionality to fetch your page as Googlebot and see the results has now moved into Google Search Console.
You can use the URL Inspection tool to analyze your live URL.
I've tested it on an AngularJS app, and Googlebot was able to crawl the page content, including data fetched from an AJAX request.
One very important restriction is that Googlebot does not allow AJAX requests while the page is being loaded.
In my blog post I explain how to adapt a single-page application so that it becomes crawlable, without the need to render HTML snapshots on the server.
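One generic way to adapt for that restriction (a sketch of the general idea, not necessarily the exact approach from the post): have the server embed the initial data in the page, and only fall back to AJAX when it is absent:

    // Sketch: prefer data embedded by the server at render time,
    // so no AJAX is needed while the crawler loads the page.
    // window.__INITIAL_DATA__ is a made-up name for illustration.
    async function getInitialData() {
      if (window.__INITIAL_DATA__) {
        // Embedded via a <script> tag when the server rendered the page.
        return window.__INITIAL_DATA__;
      }
      // Normal users without embedded data still get the AJAX path.
      const res = await fetch('/api/page-data'); // hypothetical endpoint
      return res.json();
    }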