Get value of Ace.js editor in golang - javascript

I'm here again to clear something up. I have an HTML page served by Go with an embedded Ace.js editor, and what I'm looking for is to get the editor's string content and work with it inside Go to analyze it. The question is, how do I do that? I previously did this in JavaScript, where it was as easy as:
let editor = ace.edit(0);
let value_Txt = editor.getSession().getValue();
I would appreciate your help, since I recently started working with Go.

I guess that Go, in this case, is only responsible for rendering the page. From that moment onwards, Ace.js runs in the client's browser.
You won't get the editor's content on the server until you submit it through an API (e.g. using POST) and do something with it there. What you were doing in JS before still applies now.
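For example, a minimal client-side sketch (the /analyze route is a hypothetical name for illustration, not from the original post): read the editor's value exactly as before, then POST it to the Go backend:
// Grab the editor content as in the question
let editor = ace.edit(0);
let valueTxt = editor.getSession().getValue();
// Send it to a (hypothetical) Go endpoint for analysis
fetch("/analyze", {
    method: "POST",
    headers: { "Content-Type": "text/plain" },
    body: valueTxt
})
    .then(response => response.text())
    .then(result => console.log("server replied:", result));
On the Go side, the handler registered for that route can read the posted string with io.ReadAll(r.Body) and analyze it from there.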

Related

How to get the whole source HTML code of a webpage that is generated by JavaScript using Java / Webdriver?

I am a newbie in programming and I have a task here that I need to solve. I am trying to get the HTML source code of a webpage using the Java / Webdriver method getPageSource(). The problem is that the page is somehow generated, probably by JavaScript, so the result I get is HTML containing just the page skeleton - a table that is empty, not filled with data. But there is a tag like <script type="text/javascript" src="/x/js/main.c0e805a3.js"></script> at the very bottom of that HTML code.
The question is, how can I force Webdriver to run that JavaScript and give me the result - the whole source HTML with data? I already tried using js.executeScript("window.location = '/x/js/main.c0e805a3.js'"); before calling getPageSource(), but without success.
Any help will be appreciated, thanks!
There are quite a few setups now that can run the JavaScript on a web page. The most well known is likely Selenium, since it has been around for a while. Others include Karate, Puppeteer, and even an old tool called Rhino. Puppeteer is a Google project that runs on server-side JavaScript (Node.js). They don't like us comparing and contrasting libraries here, though.
I haven't had the time to engage Selenium yet, but I write HTML parsing, search, and update code all the time. If your only goal is to load a page whose contents are dynamically filled in by AJAX calls - by which I mean you only want the HTML you would normally see when you visit the site's web page, and you are not concerned with button presses - then the tool I have been using for that is called Splash. This tool does have the ability to let you invoke JavaScript, but if all you want is to let the JS on the page dynamically load the table, then literally all you have to do is start the tool and add one line to your program.
On Google Cloud Platform, these two commands will start a Splash proxy server. If you are writing your code on AWS (Amazon) or Azure (Microsoft), it would likely be similar. If you are running your code on a local machine in an office, you would have to research how to start it there.
Install Docker. Make sure Docker version >= 17 is installed.
Pull the image:
$ sudo docker pull scrapinghub/splash
Start the container:
$ sudo docker run -it -p 8050:8050 --rm scrapinghub/splash
Then, in your code, all you have to do is the following:
// If your original code looked like this:
URL url = new URL("https://en.wikipedia.org/wiki/Christopher_Columbus");
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setRequestMethod("GET");
con.setRequestProperty("User-Agent", USER_AGENT);
return new BufferedReader(new InputStreamReader(con.getInputStream()));
Change the first line of code in this example to the following, and (theoretically) any dynamically loaded HTML tables that are filled in by the page's onload events will be loaded automatically before the HTML page is returned.
// Add this line to your methods
String splashProxy = "http://localhost:8050/render.html?url=";
URL url = new URL(splashProxy + "https://en.wikipedia.org/wiki/Christopher_Columbus");
For most web sites, any initial tables that are filled in by JS/jQuery/AJAX will be populated. If you are willing to learn the Lua programming language, you can also start invoking Splash's scripting methods. It has been pretty convenient for my purposes, since I am not writing web-page testing code (code that simulates user button presses). If that is what you are doing, Selenium is likely worth the time spent learning and studying its API.

Running Javascript from Java (but need jQuery, DOM and ajax)

I'm building a web app with a diagram in it.
For the diagram I used the jsPlumb library.
Link: https://jsplumbtoolkit.com/
One of my requirements is to make blocks inside the diagram, like a flowchart.
And one type of block can have a customizable script inserted into it.
So, when the user double-clicks that block, they can enter JavaScript code in a textarea, to be executed later.
Here lies my problem :
I am able to run the script code just fine while it is still on the front-end side (JSP), in the browser, using JavaScript's new Function.
But after the block is saved, the script is saved to the DB.
And if I need to run it again, it will be executed from the back-end (Java).
Therefore, I used ScriptEngine to run the JavaScript.
The problem is that ajax, the $ sign, console, etc. are not recognized from Java.
And I found out later that ScriptEngine does not support those kinds of things.
So I wonder, is there any possible way to make this work?
I'm open to other alternative idea.
Thank you
Use HtmlUnit Java Library
Sample code
// Create a headless browser that emulates Chrome
final WebClient webClient = new WebClient(BrowserVersion.CHROME);
// Load a page that already includes jQuery
final HtmlPage page = webClient.getPage("http://127.0.0.1:9090/mysite");
// Execute arbitrary JavaScript in the context of that page
page.executeJavaScript("$");
webClient.close();
Here http://127.0.0.1:9090/mysite can be your local website, one that already has jQuery loaded.
You can also inject a script tag on the fly in a blank html page
If you are behind a proxy, then:
final WebClient webClient = new WebClient(BrowserVersion.CHROME, "proxyhost", 8080);
Alternate Idea
Use Jaunt - Java Web Scraping & JSON Querying Java Library
Sample code and full documentation are available on its site.

PHP HttpRequest to create a web page - how to handle long response times?

I am currently using JavaScript and XMLHttpRequest on a static HTML page to create a view of a record in Zotero. This works nicely except for one thing: the page's HTML title.
I can of course also change the <title>...</title> tag, but if someone wants to post the view to, for example, Facebook, the static title of the web page is what will be shown there.
I can't think of any way to fix this with just a static page and JavaScript. I believe I need a page dynamically created by a server that does something similar to XMLHttpRequest.
For PHP there is HTTPRequest. Now to the problem: in the JavaScript version I can use asynchronous calls, but with PHP I think I need synchronous calls. Is that something to worry about?
Is there perhaps some other way to handle this that I am not aware of?
UPDATE: It looks like those trying to answer are not at all familiar with Zotero. I should have been clearer. Zotero is a reference database located at http://zotero.org/. It has an API that can be used through XMLHttpRequest (which is what I said above).
Now, I cannot use that in the scenario I described above, so I want to call the Zotero server from my own server instead (through PHP or something else).
(If you are not familiar with the concepts it might be hard to understand and answer the question. Of course.)
UPDATE 2: For those interested in how Facebook scrapes a URL you post there, please test it here: https://developers.facebook.com/tools/debug
As you can see by testing there, no JavaScript is run.
Sorry, I'm not sure I understand what you are trying to ask; do you just want to change the page's title?
Why not use javascript?
document.title = newTitle
Facebook expects the title (or the OpenGraph og:title tag) to be present when it fetches the page. It won't execute any JavaScript for you to fill in the blanks.
A cool workaround would be to detect the Facebook scraper with PHP by parsing the User Agent string, and serving a version of the page with the information already filled in by PHP instead of JavaScript.
As far as I know, the Facebook scraper uses this header for User Agent: "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)"
You can check to see if part of that string is present in the header and load the page accordingly.
if (strpos($_SERVER['HTTP_USER_AGENT'], 'facebookexternalhit') !== false)
{
//synchronously load the title and opengraph tags here.
}
else
{
//load the page normally
}

take content from WordPress page and deliver it to HTML via ajax

I have the following problem:
HTML blank page on server 1.
WordPress site on server 2.
What I need is to pull the content from www.wordpress.site/sample-page/ into the HTML page on server 1 - but not the entire page, only the part that I can edit from wp-admin, so without header and footer.
Also, I don't know if there is any other method, but I need it to be done via JavaScript/jQuery or Ajax.
I've used Google, but it is hard to find a tutorial for this; I've tried a lot of them, but none does what I need, and I don't know enough JavaScript to make it work on my own.
So, can someone help me, please?
BIG Thanks!
Andrei
Later edit:
I've found this working example: http://jsfiddle.net/mdawaffe/hLWdH/
It works as written, but if I change the domain to my own, it does not work.
What script do I have to implement on the server from which the content is called (taken)?
For more information, as you asked:
I have an HTML + CSS + JS template that I will use with PhoneGap (if you don't know about it, try it, it's very useful) to create a mobile app for Android, iOS, and BlackBerry.
Now, I have this site: m.trafficvoice.ro (I hope I can post links here).
On the 'live stream' page (it's called services.html), I have an HTML5 audio tag/player.
What I need is to get the content from www.trafficvoice.ro/whatever-the-name-page, but only the part that I can edit in WordPress (so without header and footer).
Why? Because in the future there will be more streams to add, and some of them may go down for unknown reasons, so I need to be able to update that page without updating the entire app, uploading it to the store, waiting for approval, having the client download it, etc.
Big thanks!
Andrei
Could you just use an iframe instead? You could modify a template in your theme to not display the header/footer and then load that page in the iframe.
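If an iframe doesn't fit, another option (a sketch, not part of the answer above) is the WordPress REST API, available since WordPress 4.7, which returns just the editable content as JSON, without the theme's header and footer. Note that the WordPress server must send CORS headers for a page on another domain to read the response - which is likely why the jsfiddle stopped working when the domain was changed. The slug below is from the question; the target element id "content" is invented for illustration:
// Fetch only the editable body of the page with slug "sample-page"
fetch("https://www.wordpress.site/wp-json/wp/v2/pages?slug=sample-page")
    .then(response => response.json())
    .then(pages => {
        // content.rendered is the page body, without header or footer
        document.getElementById("content").innerHTML = pages[0].content.rendered;
    });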

Django-tinymce not working; Getting a normal textarea instead

I'm trying to use django-tinymce to make fields that are editable through Django's admin with a TinyMCE field. I am using tinymce.models.HTMLField as the field for this.
The problem is it's not working. I get a normal textarea. I check the HTML source, and it seems like all the code needed for TinyMCE is there. I also confirmed that the statically-served JavaScript file is indeed being served. But for some reason it isn't working.
What I did notice, though, is that if I don't set TINYMCE_COMPRESSOR = True in the settings file, it does start to work. What could cause this behavior?
What are your web server and web browser? Perhaps the compressor is setting the gzip/bzip2 header and the server isn't processing it... so the response goes out as plain text but the client expects it compressed?
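One quick way to test that theory (a sketch; the URL below is a placeholder - use the actual src of the TinyMCE <script> tag from your rendered admin page) is to request the compressor output from the browser console and look at what comes back:
// Run in the browser console on the admin page.
// Replace the URL with the real src of the TinyMCE script tag.
fetch("/tinymce/compressor/")
    .then(response => {
        console.log("status:", response.status);
        console.log("Content-Type:", response.headers.get("Content-Type"));
        return response.text();
    })
    // Readable JavaScript here is fine; binary garbage suggests a
    // mislabeled or double-compressed response from the compressor view.
    .then(text => console.log(text.slice(0, 80)));
If the body is unreadable, the compressed response is being mangled somewhere between Django and the browser, and leaving TINYMCE_COMPRESSOR off is a reasonable workaround.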
