Curl to get the contents of an angular page - javascript

I am working on an Angular 2 application, and we need to run this app in Google Earth. Unfortunately, Google Earth uses a much older version of Chrome which does not know anything about Angular 2. A mechanism is needed to run the Angular 2 app on the server and send the initial HTML response, with Angular already executed/bootstrapped, to this browser.
I am thinking of creating a PHP server which communicates with the Google Earth browser. So essentially, Google Earth will request pages from this PHP server. The PHP server will make cURL requests to fetch the corresponding pages from the Angular 2 application and return the HTML back to the Google Earth browser.
But there is a catch: cURL does get a response from the Angular app, but it does not wait for the app to finish bootstrapping. This means the response is produced before ui-view is filled with the router's content, so I do not get any useful HTML back.
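To illustrate why this fails, here is roughly what that proxy fetch receives (sketched in Node rather than PHP purely for illustration; http://my-angular-app.example.com and the route are placeholders for the Angular 2 app's address):

    // Sketch only: the same GET the PHP/cURL proxy would make, written in Node for illustration.
    const http = require('http');

    http.get('http://my-angular-app.example.com/some/route', (res) => {
      let html = '';
      res.on('data', (chunk) => { html += chunk; });
      res.on('end', () => {
        // This is the unbootstrapped shell: the router outlet / ui-view is still empty,
        // because nothing in a plain HTTP request executes the bundled JavaScript.
        console.log(html);
      });
    });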
I used this site to check the cURL responses: http://onlinecurl.com/
You can pick any Angular site and use it to see the response that comes back.
Is there a way to make cURL wait until Angular bootstraps and then return the HTML? Or is there any other way to solve this problem?
I have tried Angular Universal, but it seems too complicated to implement and I have little time to fix this issue.
All solutions welcome. Thanks in advance.

Angular breaks the Semantic Web, Tor in max security mode, and any other sort of compatibility short of a rendering web browser. So I recommend avoiding Angular, or, if that's not an option, creating a faster, smaller, more compatible mirror of your site that serves one html.gz file per page. Use something like Selenium IDE to generate it, because Angular Universal has no support for window/document/jQuery/etc. and PhantomJS and Nightmare use old, buggy code. A 2,361.07 KB example Angular page can be reduced to a 9.7 KB flat file.
Another option, if you only want the data, is to find the XHR JSON file that contains the body HTML (if the site exposes one) and just fetch that, e.g. curl -sL 'https://api.ontario.ca/api/drupal/page%2Fjobs-and-prosperity-fund-new-economy-stream?fields=body' > body.json for the above example.
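As a rough sketch of that last option (the shape of the returned JSON, including the 'body' field name, is an assumption about this particular API; adjust it to whatever the response actually contains), the same fetch can be scripted and the extracted body written out as a small flat file:

    // Sketch: fetch the XHR JSON the page itself uses and keep only the body HTML.
    const https = require('https');
    const fs = require('fs');

    const url = 'https://api.ontario.ca/api/drupal/page%2Fjobs-and-prosperity-fund-new-economy-stream?fields=body';

    https.get(url, (res) => {
      let json = '';
      res.on('data', (chunk) => { json += chunk; });
      res.on('end', () => {
        const data = JSON.parse(json);
        // 'body' is assumed to hold the page's HTML; fall back to the raw JSON otherwise.
        const bodyHtml = typeof data.body === 'string' ? data.body : JSON.stringify(data);
        fs.writeFileSync('body.html', bodyHtml);
      });
    });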

Related

File Protocol Ajax Request

I have made a project whose structure is like this
When I run my index.html from Firefox, it works fine.
But when I open it with Chrome, it gives a CORS error.
Now my problem is that Chrome does not support AJAX requests over the file:// protocol, and since I have to distribute my project to others, I don't want to force them to run it in Firefox only.
Internally I am using AJAX calls in the project to load resources. Can somebody suggest how to bypass those AJAX calls when loading resources? Is there some solution or any third-party JS which can help me?
Note: Please don't suggest using XAMPP, Apache, etc., where I can put my project and run it as localhost for Chrome, as I don't want to force users to download these just to run my project. Please suggest another solution where I can make a change in the code and it works for everyone.
Here are some links which might help you understand my problem better:
Ajax in Jquery does not work from local file
AJAX code do not run locally
AJAX request using jQuery does not work
Embed the data directly into the JavaScript or HTML and read it from there.
The data isn't going to be changing based on user input or the contents of a database, so having it in a separate "http" resource doesn't bring huge benefits.
If you want to store the data in XML to make it easier to edit in your development environment, then write a build tool to bundle it up into an embedded format before distributing.
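For example, a minimal sketch of the embedding approach (the data and the element id below are made up for illustration):

    <!-- Instead of requesting data.xml over file:// with $.ajax, ship the data inside the page. -->
    <ul id="list"></ul>
    <script>
      // Hypothetical data that would otherwise have lived in a separate XML/JSON file.
      var APP_DATA = {
        items: [
          { id: 1, label: "First" },
          { id: 2, label: "Second" }
        ]
      };

      // Read it directly; with no XHR there is nothing for Chrome to block under file://.
      APP_DATA.items.forEach(function (item) {
        var li = document.createElement("li");
        li.textContent = item.label;
        document.getElementById("list").appendChild(li);
      });
    </script>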

AngularJs contents not visible in View source

I have a site which was coded in AngularJS, and it has an SEO problem.
Now I am using AngularJS SEO with PhantomJS.
I followed all of the steps, and at the end this command runs successfully:
$ phantomjs --disk-cache=no angular-seo-server.js [port] [URL prefix]
But even now, only my AngularJS code is visible in view source.
I can't get the rendered contents in view source.
Also, I want to know the equivalent of cURL in ASP.NET.
Any help is appreciated.
Configuring the Prerender service to serve pre-rendered HTML for JavaScript pages/apps will resolve your issue.
Since you are using PhantomJS integrated with AngularJS for your app, the following reference applies:
https://github.com/prerender/prerender
An .htaccess configuration can also be used to serve the pre-rendered HTML to crawlers, as referenced in the following links:
How to make a SPA SEO crawlable?
https://gist.github.com/Stanback/7028309
https://gist.github.com/thoop/8072354
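If there happens to be a Node/Express layer in front of the app, the same idea can also be wired in with the prerender middleware instead of .htaccess (a sketch only; the token is a placeholder and the static folder name is assumed):

    // Sketch: Express variant of the Prerender setup (the gists above show the Apache/.htaccess rules).
    var express = require('express');
    var prerender = require('prerender-node');

    var app = express();
    // 'YOUR_PRERENDER_TOKEN' is a placeholder for your own prerender.io token.
    app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));

    // Real users get the normal AngularJS app; crawlers get pre-rendered HTML.
    app.use(express.static('public'));
    app.listen(8080);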
Hope this gives you more info; this is how I resolved it.

Spiderable package working very sporadically due to fonts from typography.com [UPDATE]

Update
OK, I've tracked down the error! I'm using fonts from http://www.typography.com/ and if I remove the link to the fonts from the <head> (or even put it in the body instead) the site is fetched correctly every time!
Summary: If you're using webfonts which are loaded from a remote domain (with some kind of license approval process taking place as well) then the spiderable package will break!
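One way to apply that fix without dropping the fonts entirely (a sketch, not from the original post; the stylesheet URL is a placeholder) is to add the remote font CSS only after the window load event, so the link is out of the initial <head> and doesn't hold up the PhantomJS fetch:

    // Sketch of a workaround: append the remote typography.com stylesheet after load,
    // so the spiderable/PhantomJS snapshot is not blocked by the cross-domain font request.
    window.addEventListener('load', function () {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '//cloud.typography.com/XXXXXX/XXXXXX/css/fonts.css'; // placeholder URL
      document.head.appendChild(link);
    });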
The original question:
So I got this simple site built using meteor.js. It's on Digital Ocean, deployed using meteor up (with phantomjs enabled) and it's using the spiderable package.
Here's the site; it's a simple portfolio.
Now when I, for example, do curl http://portfolio.new-doc.com/?_escaped_fragment_=, it will first return an empty body (classic meteor-without-spiderable behaviour), but if I do the same curl again within a few seconds it returns the correct result. (The same is true if I curl localhost:3000 on my machine.)
So first the spiderable package does not do its thing, and then it does. It kind of feels like on the first curl it returns the empty site (but loads up all the publications/subscriptions on the server) and on the second curl it uses the now-loaded subscriptions and returns the correct result.
This is also true for Google webmaster tools. My first fetch as google bot returns an empty body, and the second one (if made quickly after the first one) returns the correct page.
The site only has one publish and one subscription. The publish either returns one or more pages from a subscription or runs this.stop(). The subscription is set up in a waitOn function in the app's only iron-router route. No complicated stuff here.
Since the curl command sometimes returns the correct result, I don't think the error is in the publish/subscriptions?
I've gotten the spiderable package to work in the past, but I've also spent a lot of time battling it!
Quite frustrating.
Any ideas? Thanks!

How to deploy WaveMaker project without runtimeLoader.js

WaveMaker is a powerful Ajax-based UI builder, but its JSON-RPC API standard is incompatible with our web service, which only has a RESTful API. As a result, we would like to use WaveMaker to design a UI without using any of its services, and extract only the part of its source code that runs on the browser side (discarding all services).
Unfortunately, we can neither view nor test the extracted code (all .html files show an empty page): a JavaScript reference in index.html points to runtimeLoader.js, which we cannot find anywhere. So, is it possible to deploy the browser-side code on a web container (not an application container like Tomcat) without runtimeLoader.js? If that is not possible, how do I change the source code so it can be tested without WaveMaker?
If you don't mind having a Java server in the mix, you could "import" REST calls to your API into the application. The XHR service (new in 6.5) targets JSON-returning services. The 'Build-a-Service' option does best with XML-returning services. The browser would then call the WaveMaker Java server, which in turn calls your REST services.
An easy way to get started with a WaveMaker client-only app is to use the PhoneGap build option. This will build a zip file of a stand-alone app. If you unzip that into, say, an Apache-served folder, the pages will render, etc. Note this build is targeted towards mobile devices via PhoneGap, so you will want to make adjustments if you are targeting desktop browsers.
Also, runtimeLoader.js can be found in the client runtime lib folder, e.g. /studio/lib/runtimeLoader.js in the installation.

Using PhoneGap -- fetching from an XML server not working correctly?

I'm using the PhoneGap framework with JavaScript. I also use Backbone.js. The problem is that when I try to fetch data from a server via the Backbone.Collection.fetch() routine with a valid URL, I get an error meaning that the XML wasn't fetched. Any idea how to solve this? By the way, if I run this in Eclipse as a Web Application it works, since Eclipse uses its own internal server; I'm wondering if PhoneGap does something similar as well? Thanks.
PhoneGap does not do anything to change how XMLHttpRequests are made though it does run on the file:// URI scheme and will return an HTTP status code of 0, which some libraries may not check for. You can see a simple PhoneGap Ajax example here: https://github.com/phonegap/phonegap-samples/blob/master/ajax/index.html
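For example, a minimal Backbone fetch inside PhoneGap might need an error handler that recognizes the status-0 case (the endpoint and collection below are made up for illustration):

    // Sketch: a Backbone collection fetched from inside a PhoneGap app.
    var Items = Backbone.Collection.extend({
      url: 'https://example.com/items.json' // placeholder endpoint
    });

    var items = new Items();
    items.fetch({
      success: function (collection) {
        console.log('fetched', collection.length, 'items');
      },
      error: function (collection, xhr) {
        // Under the file:// scheme the WebView reports an HTTP status of 0,
        // which some libraries treat as a failure even when a response came back.
        console.log('fetch failed, status:', xhr.status);
      }
    });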
