I have a site built with AngularJS, and it has an SEO problem.
Now I have tried AngularJS SEO using PhantomJS.
I followed all the steps, and it runs successfully up to this command:
$ phantomjs --disk-cache=no angular-seo-server.js [port] [URL prefix]
But even now, only my AngularJS code is visible in view source.
I can't see the rendered content in view source.
I would also like to know the equivalent of cURL in ASP.NET.
Any help is appreciated.
Configuring your server to serve pre-rendered HTML for your JavaScript pages/app using the Prerender service will resolve your issue.
As you are using PhantomJS with AngularJS for your app, the following reference applies:
https://github.com/prerender/prerender
An .htaccess rewrite rule can also route crawler requests to the pre-rendered HTML, as covered in the following links:
How to make a SPA SEO crawlable?
https://gist.github.com/Stanback/7028309
https://gist.github.com/thoop/8072354
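If you serve the app through Node rather than Apache, the same _escaped_fragment_ forwarding can be sketched as a small Express middleware. This is only a rough sketch under assumptions, not the Prerender project's own middleware: it assumes the Prerender service is running locally on port 3000, and www.example.com is a placeholder for your domain.

// Hypothetical Express sketch: proxy crawler requests to a local Prerender service.
var express = require('express');
var http = require('http');

var app = express();

app.use(function (req, res, next) {
  // Crawlers following the AJAX crawling scheme add ?_escaped_fragment_=...
  if (typeof req.query._escaped_fragment_ === 'undefined') {
    return next(); // normal browsers get the regular AngularJS app
  }

  // Ask the Prerender service (assumed on localhost:3000) for rendered HTML.
  var target = 'http://localhost:3000/http://www.example.com' + req.path;
  http.get(target, function (prerendered) {
    res.set('Content-Type', 'text/html');
    prerendered.pipe(res); // stream the pre-rendered HTML back to the crawler
  }).on('error', function () {
    next(); // fall back to the normal app if the service is unreachable
  });
});

// ...static file serving for the AngularJS app would follow here...
app.listen(8080);

In production you would more likely use the project's own middleware or the Apache/nginx configs from the gists above; this only shows the shape of the forwarding.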
Hope this gives you more info; it is how I resolved it myself.
Related
I'm working on a web project using Grails 2.3.7.
Every time I change any JavaScript file and refresh the page, I get sent to a blank page with the following message:
"Resources are being processed, please wait..."
After some research, I discovered that it has something to do with "DevModeSanityFilter.groovy".
Can anyone help so I can change the JS files and see the changes without this message and without restarting the server?
Remove the abandoned and deprecated resources plugin from your BuildConfig.groovy and use the asset-pipeline plugin instead. asset-pipeline uses a much better approach to resource handling and is actively developed.
I am working on an Angular 2 application, and we need to use Google Earth to run it. Unfortunately, Google Earth uses a very old version of Chrome, which does not know anything about Angular 2. A mechanism is needed to run the Angular 2 app on the server and send the initial HTML response, with Angular already executed/bootstrapped, to this browser.
I am thinking of creating a PHP server which communicates with the Google Earth browser. So essentially, Google Earth will request pages from this PHP server. The PHP server will make cURL requests to fetch the corresponding pages from the Angular 2 application and return the HTML to the Google Earth browser.
But there is a catch: cURL does get a response from the Angular app, but it does not wait for the app to finish bootstrapping. This means cURL returns its response before ui-view is filled with the router content, so I do not get any useful HTML back.
I used this site to check the cURL responses: http://onlinecurl.com/
You can pick any Angular site and use this link to see the response it gives back.
Is there a way to make cURL wait until Angular bootstraps and then return the HTML? Or is there any other way to solve this problem?
I have tried Angular Universal, but it seems too complicated to implement and I am short on time to fix this issue.
All solutions welcome. Thanks in advance.
Angular breaks the Semantic Web, Tor in max-security mode, and any other sort of compatibility short of a rendering web browser. So I recommend avoiding Angular, or, if that's not an option, creating a faster, smaller, more compatible mirror of your site that serves one html.gz file per page. Use something like Selenium IDE to produce it, because Angular Universal has no support for window/document/jQuery/etc., and PhantomJS and Nightmare use old, buggy code. A 2,361.07 KB example Angular page can be reduced to a 9.7 KB flat file.
Another option, if you only want the data, is to use (or find) the XHR JSON file that contains the body HTML and just fetch that; for the above example:
curl -sL 'https://api.ontario.ca/api/drupal/page%2Fjobs-and-prosperity-fund-new-economy-stream?fields=body' > body.json
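If you still need the rendered markup for whole pages and can live with the caveats above about PhantomJS, a headless renderer rather than cURL is the usual workaround: load the page, give Angular time to bootstrap, then dump the DOM. A rough sketch, not a production script; the URL and the fixed 3-second wait are placeholders, and polling for an element the router is expected to render would be more robust:

// render.js -- run with: phantomjs render.js http://www.example.com/#/some-route
var page = require('webpage').create();
var system = require('system');
var url = system.args[1];

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('Failed to load ' + url);
    phantom.exit(1);
  }
  // Wait a fixed amount of time for Angular to bootstrap and fill its outlets,
  // then print the rendered HTML (which is what cURL alone never sees).
  setTimeout(function () {
    console.log(page.content);
    phantom.exit();
  }, 3000);
});

The PHP proxy from the question could shell out to a script like this (or to a Prerender-style service) instead of calling cURL directly.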
The title pretty much says it all; your help is greatly appreciated. I'm happy that I got the JavaScript to show up in the first place; now, if only it actually updated when I changed it, I could start using JavaScript effectively in my portal project.
Help is MUCH appreciated!!!
If you are running your project directly from JDeveloper, this is supported out of the box: change your files and reload. Don't forget about the browser cache.
Use OHS in front of your WebLogic server and set up a location for serving static files. Then you can update them as you would with a regular web server.
Quick and dirty (only for the debugging/development stage): embed your script in the ADF page using the <af:resource> component.
Essentially, I want to create a personal website that functions like this one:
https://sublime.wbond.net/packages/Jade
The whole thing is contained within one HTML page, and clicking on a nav item only loads the required information.
Looking at the JavaScript code, I believe the developer is using Backbone.js and Handlebars.js. I think they used PHP for the backend.
There is a key piece of functionality on this site that I'm after. Essentially, when you are on the aforementioned page and then navigate to https://sublime.wbond.net/docs, an AJAX request fetches only the HTML that's needed, which is then appended to the current page.
Having written a simple Backbone app by following a tutorial, it seems to be done differently there. Hosting the app with Node, it loads all of the content; when you go to another route, it still loads all the content and then Backbone appends the right piece based on the URL. I can see this being useful for certain kinds of apps, but I don't want that behavior. I looked into it more and thought about using Backbone's fetch() functionality, but I'm not sure he's using that either.
It looks like he's doing something similar to Rendr by Airbnb. I can't really use that because its documentation is not sufficient right now.
It appears that when you request a page, it just gives you the HTML ready to go, without the need to compile it locally. Is there something I'm missing here in terms of using Backbone, or is this just some tool he's made to handle this?
If you are not afraid of spending hours in front of videos, these excellent screencasts could get you started: the author explains how to build a single-page app using Backbone and Marionette from scratch.
This website is not using Backbone; the solution it uses is a mix of full HTML page loads and JSON calls. Look at these links:
https://sublime.wbond.net/browse.json
https://sublime.wbond.net/search.json
https://sublime.wbond.net/docs.html
https://sublime.wbond.net/news.html
https://sublime.wbond.net/stats.json
The simplest way to get the same behavior as wbond.net is to change the way you render pages on the backend: check whether the request is an XHR and, if so, render only the content, without the layout. On the frontend, bind a click event to each link that sends an AJAX request to the bound URL and puts the whole response into the page's content area (for example with jQuery's $.get() method).
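A minimal jQuery sketch of that frontend half; the a.nav-link selector, the #content container, and the server-side check of the X-Requested-With header are assumptions for illustration, not code taken from wbond.net:

// Intercept navigation clicks and swap only the content area.
$(document).on('click', 'a.nav-link', function (e) {
  e.preventDefault();
  var url = $(this).attr('href');

  // $.get() sends the X-Requested-With: XMLHttpRequest header, which the
  // backend is expected to check so it renders the content without the layout.
  $.get(url, function (html) {
    $('#content').html(html);         // replace only the content area
    history.pushState(null, '', url); // keep the address bar in sync
  });
});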
I have developed an AngularJS application together with a Parse.com backend (data only, no business logic). They communicate over REST.
My problem is that I would like to get my page indexed by Google. To achieve that, I somehow have to serve all my content as static pages so that Google can index it.
I found a nice service called getseojs.com, which does nothing more than serve all the contents of my website as static content.
All I had to adjust on my side was to add a rewrite condition and rule to my .htaccess file, which simply forwards all calls containing "_escaped_fragment_=" to the getSEOjs service.
My only problem is that my links aren't working in the static version.
The reason is quite simple:
The URL of my AngularJS application is something like www.mydomain.com/app/
Now my links look something like <a href="#!/sample/content">Sample Content</a>, which works fine in normal browsers.
The problem is that in the static content the domain is different. It is something like:
http://getseojs.com/v2/sdfxsaa2/://www.mydomain.com:80/app/?_escaped_fragment_=/sample/content
for the same Sample Content site. When I click on a link on the static site, I get redirected to something like:
http://getseojs.com/v2/sdfxsaa2/://www.mydomain.com:80/app/?_escaped_fragment_=/sample/content#!/othercontent
instead of
http://getseojs.com/v2/sdfxsaa2/://www.mydomain.com:80/app/?_escaped_fragment_=/othercontent
Is there any way I can avoid this? Is there no other way than working with absolute URLs? But even then I have a problem, because I need the /app/ part (since that is where my website is located), and between the app part and the routes I need the hashbang (#!), or, in the case of Googlebot, the "?_escaped_fragment_=" part.
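To make those two URL shapes concrete, here is a purely illustrative snippet (the domain and route are placeholders): the first form is what normal browsers use, the second is what Googlebot / getSEOjs requests.

// Illustration only: the two forms of the same route described above.
var base = 'http://www.mydomain.com/app/'; // the /app/ part the site lives under

function browserUrl(route) {
  return base + '#!' + route;                   // e.g. .../app/#!/othercontent
}

function crawlerUrl(route) {
  return base + '?_escaped_fragment_=' + route; // e.g. .../app/?_escaped_fragment_=/othercontent
}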
I hope someone can help me; I have no idea how to solve this issue.
Thanks a lot.
Greets
Marc