Reading webpage content into a var with HTML/JavaScript - javascript

Let's say I have this weblink: http://checkip.amazonaws.com/ . I've made a Java program (an actual program, not a webpage) that reads the content of this webpage (e.g. "25.25.25.25") and displays it in a jLabel (using NetBeans IDE 1.7.3), and it works.
Now how can I read the contents of this same webpage (e.g. "25.25.25.25") and display it as normal text on a webpage? (The final webpage must be .html, not .php or whatever.)
I don't mind what kind of script it is, HTML or JavaScript or anything; I just need it to work so that when the webpage is opened it reads something like:
"Your IP: 25.25.25.25"
Preferably by reading the contents of http://checkip.amazonaws.com/ into
<script>var ip = needCodeHere</script>
If I can get the IP into a var, or read the contents of that webpage into a var, I'm happy, but other code is fine too as long as it works.
Please help :( I've been staring at Google for days and can't find a solution.

You'll need 3 files (in the same directory) to do that: an HTML file to show the IP, a PHP file to fetch that IP via cURL (the browser can't read checkip.amazonaws.com directly, since the endpoint doesn't send CORS headers), and a JS file to connect the HTML and the PHP. It would be simpler if the "final webpage" could be ip.php itself, but let's do it this way:
1) ip.html (the "final webpage")
<script src="//code.jquery.com/jquery-1.11.0.min.js"></script>
<script type="text/javascript" src="ip.js"></script>
<div id="ip"></div>
2) ip.php
<?php
// Fetch the IP page server-side and pass it through to the browser.
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'http://checkip.amazonaws.com');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
$result = curl_exec($curl);
curl_close($curl);
echo $result;
?>
3) ip.js
// Wait for the DOM so #ip exists before we try to fill it.
$(function () {
    $.ajax({
        url: 'ip.php',
        type: 'GET',
        success: function (data) {
            $('#ip').html('Your IP: ' + data);
        }
    });
});
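As noted above, if the final page could be PHP after all, the whole thing collapses to a one-liner. A minimal sketch, assuming allow_url_fopen is enabled on the host:
<?php
// ip.php serving as the final page itself - no HTML/JS glue needed
echo 'Your IP: ' . trim(file_get_contents('http://checkip.amazonaws.com'));
?>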
Let me know if you need more explanation.

Related

Take screenshot from external website

I am developing a start page where users can add links to the page by using a form. They can add a name, URL, and description, and upload an image.
I want to automate the process of uploading an image: the image should be captured automatically. My script should take a screenshot of the website whose URL the user entered. I know I can take screenshots of HTML elements by using html2canvas.
Approach 1
My first approach was to load the external website into an iframe, but this does not work because some pages restrict it; e.g. even the iframe tutorial on w3schools.com does not work, and I get Refused to display 'https://www.w3schools.com/' in a frame because it set 'X-Frame-Options' to 'sameorigin'.
HTML
<div id="capture" style="padding: 10px; color: black;">
    <iframe src="https://www.w3schools.com"></iframe>
</div>
Approach 2
My next approach was to make a call to my webserver, which loads the target website and returns the HTML to the client. This works, but the target site is not rendered properly; e.g. images are not loading (see screenshot below).
HTML
<div id="capture" style="padding: 10px; color: black;"></div>
JS
var testURL = "http://www.google.de";
$.ajax({
    url: "http://server/ajax.php",
    method: "POST",
    data: { url: testURL },
    success: function(response) {
        $("#capture").html(response);
        console.log(response);
        html2canvas(document.querySelector("#capture")).then(
            canvas => {
                document.body.appendChild(canvas);
            }
        );
    }
});
PHP
if (empty($_POST['url'])) {
    die('No url given'); // bail out early so $url is never undefined below
}
$url = filter_input(INPUT_POST, "url");
$c = curl_init($url);
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
//curl_setopt(... other options you want...)
$html = curl_exec($c);
if (curl_error($c))
    die(curl_error($c));
// Get the status code
$status = curl_getinfo($c, CURLINFO_HTTP_CODE);
curl_close($c);
echo $html;
Is it possible to achieve this?
Update
I managed to load some of the pictures by changing my AJAX handling, but they are still not rendered by html2canvas. Why?
var testURL = "http://www.google.de";
$.ajax({
    url: "http://server/ajax.php",
    method: "POST",
    data: { url: testURL },
    success: function(response) {
        // Rewrite root-relative URLs in the fetched HTML to absolute ones
        response = response.replace(/href="\//g, 'href="' + testURL + "/");
        response = response.replace(/src="\//g, 'src="' + testURL + "/");
        response = response.replace(/content="\//g, 'content="' + testURL + "/");
        $("#capture").html(response);
        console.log(response);
        html2canvas(document.querySelector("#capture")).then(
            canvas => {
                document.body.appendChild(canvas);
            }
        );
    }
});
Result
(screenshot of the resulting canvas)
I love PHP, but for screenshots I found that using PhantomJS provides the best results.
Example file screenshot.js
var page = require('webpage').create();
page.open('https://stackoverflow.com/', function() {
    page.render('out.png');
    phantom.exit();
});
Then from the shell:
phantomjs screenshot.js
Or from php:
exec("phantomjs screenshot.js &");
The goal here is to generate the js file from PHP.
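That generation step could be sketched like this (assuming the phantomjs binary is on the server's PATH; the file names and target URL are just placeholders):
<?php
// Build screenshot.js from PHP, then run PhantomJS on it in the background.
$url = 'https://stackoverflow.com/'; // hypothetical target
$js = "var page = require('webpage').create();\n"
    . "page.open(" . json_encode($url) . ", function() {\n"
    . "    page.render('out.png');\n"
    . "    phantom.exit();\n"
    . "});\n";
file_put_contents('screenshot.js', $js);
exec('phantomjs screenshot.js &');
?>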
The result is a file called out.png in the same folder. This is a full-height page screenshot.
(example output image)
We can also take good captures with Firefox from the command line. This requires an X server anyway.
firefox -screenshot test.png http://www.google.de --window-size=1280,1000
(example output image)
Not in pure PHP. Nowadays the majority of sites generate content dynamically with JS. That can only be rendered by browsers, but the good news is that there is something called PhantomJS, a browser without a UI. It can do the job for you; they even have a working example in their tutorials, which I successfully implemented a few years ago with little knowledge of JavaScript.
There is an alternative library called Nightmare.js. I know it only from a friend's opinion, who says it's simpler than Phantom, but I won't guarantee to you that it won't be a nightmare; personally I haven't used it.
It is possible, but if you want a screenshot you need something like a browser that renders the page for you. The iframe approach goes in that direction, but an iframe is the page itself. If you want a .jpg, .png or something like that, the best way in my opinion is using wkhtmltoimage: https://wkhtmltopdf.org/.
The idea is that you install the Qt WebKit rendering engine on your server, just as you would install a browser there; it renders the page and saves the final result to a file. When a user submits a URL, you pass it as an argument to wkhtmltoimage, and you get an image of that URL. Basic use could be something like
wkhtmltoimage http://www.example1.com /var/www/pages/example1.jpg
You would run that statement in bash; from PHP it could be:
<?php
exec('wkhtmltoimage http://www.example1.com /var/www/pages/example1.jpg');
?>
Keep in mind that wkhtmltoimage executes CSS, JavaScript... everything, just like a browser.
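One caveat worth adding: since the URL comes from a user, it should be validated and escaped before it ever reaches the shell. A sketch of that (the output path is an assumption):
<?php
// Never pass raw user input to exec()
$url = filter_input(INPUT_POST, 'url', FILTER_VALIDATE_URL);
if (!$url) {
    die('Invalid URL');
}
$out = '/var/www/pages/' . md5($url) . '.jpg'; // hypothetical output path
exec('wkhtmltoimage ' . escapeshellarg($url) . ' ' . escapeshellarg($out));
?>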

Cross-Domain Rss Feed Request?

Ok, so for about a week now I've been doing tons of research on making XMLHttpRequests to servers, and I have learned a lot about CORS, AJAX/jQuery requests, and the Google Feed API, and I am still completely lost.
The Goal:
There are 2 sites in the picture, both of which I have access to. The first one is a WordPress site which has the RSS feed, and the other is my localhost site running off of XAMPP (soon to be a published site when I'm done). I am trying to get the RSS feed from the WordPress site and display it on my localhost site.
The Issue:
I run into the infamous Access-Control-Allow-Origin error in the console. I know that I can fix that by setting the header in the .htaccess file of the WordPress site, but there are online aggregators that are able to just read and display the feed when I give them the link. So I don't really know what those sites are doing that I'm not, and what the best way is to achieve this without posing any easy security threats to either site.
I highly prefer not to use any third-party plugins to do this; I would like to aggregate the feed through my own code, as I have done for an RSS feed on the localhost site, but if I have to I will.
UPDATE:
I've made HUGE progress with learning PHP and have finally got a working bit of code that allows me to download the feed files from their various sources, as well as store them in cache files on the server. What I have done is put an AJAX request behind some buttons on my site which switch between the RSS feeds. The AJAX request POSTs a JSON-encoded array containing some data to my PHP file, which downloads the requested feed via cURL (http_get_contents copied from a GitHub dev, as I don't know how to use cURL yet), stores it in a cache file named with an md5 hash, then filters what I need from the data and sends it back to the front end. However, I have two more questions... (It's funny how that works: getting one answer and ending up with two more questions.)
Question #1: Where should I store both the cache files and the PHP files on the server? I heard that you are supposed to store them below the root, but I am not sure how to access them that way.
Question #2: When I look at the source of the site through the browser as I click the buttons which send an AJAX request to the PHP file, the PHP file visibly appears in the list of source files, and more and more copies of it are downloaded as you keep clicking the buttons. Is there a way to prevent this? I may have to implement another method to get this working.
Here is my working php:
<?php
// cURL http_get_contents declaration
function http_get_contents($url, $opts = array()) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_USERAGENT, "{$_SERVER['SERVER_NAME']}");
    curl_setopt($ch, CURLOPT_URL, $url);
    if (is_array($opts) && $opts) {
        foreach ($opts as $key => $val) {
            curl_setopt($ch, $key, $val);
        }
    }
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if (false === ($retval = curl_exec($ch))) {
        die(curl_error($ch));
    } else {
        return $retval;
    }
}
// receive and decode the $_POSTed array
$post = json_decode($_POST['jsonString'], true);
$url = $post[0];
$xmn = $post[1]; // starting item index number (i.e. to return 3 items from the feed, starting with the 5th one)
$xmx = $xmn + 3; // max number (so three in total to be returned)
$cache = '/tmp/' . md5($url) . '.html';
$cacheint = 0; // cache lifetime in hours: this sets whether the feed is downloaded fresh or read from the cache file; I will implement a check for a newer version on the source site in the future
// if the cache file doesn't exist (or is stale), download the feed and write its contents to the cache file
if (!file_exists($cache) || ((time() - filemtime($cache)) > 3600 * $cacheint)) {
    if ($feed_content = http_get_contents($url)) { // fetch once and reuse the result
        $fp = fopen($cache, 'w');
        fwrite($fp, $feed_content);
        fclose($fp);
    }
}
// parse and echo results
$content = file_get_contents($cache);
$x = new SimpleXmlElement($content);
$item = $x->channel->item;
echo '<tr>';
for ($i = $xmn; $i < $xmx; $i++) {
    echo '<td class="item"><p class="title clear">' .
        $item[$i]->title .
        '</p><p class="desc">' .
        substr($item[$i]->description, 0, 250) .
        '... <a href="' .
        $item[$i]->link .
        '" target="_blank">more</a></p><p class="date">' .
        $item[$i]->pubDate .
        '</p></td>';
}
echo '</tr>';
?>
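(A note on question #1: a directory outside the web root isn't reachable by URL, but PHP can still read and write it through the filesystem. A minimal sketch, assuming a hypothetical layout where a cache/ directory sits one level above the document root:)
<?php
// Hypothetical layout:
//   /var/www/site/public  <- document root
//   /var/www/site/cache   <- cache files, not reachable by URL
$cacheDir = dirname($_SERVER['DOCUMENT_ROOT']) . '/cache';
if (!is_dir($cacheDir)) {
    mkdir($cacheDir, 0755, true); // create it on first use
}
$cache = $cacheDir . '/' . md5($url) . '.html'; // $url as in the code above
?>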

How to know if php file is being called within a src="..."?

On my website I have
<script src="js.php"></script>
The question is very simple, but I have no idea of the answer:
Within js.php, how can I check whether the file has been called through a script src="..."?
The purpose is to change the returned HTML code of js.php depending on how the script is called (direct access or script src="...").
The way to do it would be to set a session variable to true right before the page includes js.php:
<?php
session_start();
$_SESSION['src'] = true;
?>
<script src="js.php"></script>
Then, in the php file:
<?php
session_start();
if (isset($_SESSION['src']) && $_SESSION['src'] == true) {
    // file was called from a src
    $_SESSION['src'] = false; // reset it, so that direct access afterwards isn't misdetected
}
Cool question. Let me help ya.
I'll provide some not-100%-reliable methods here that will work in standard, non-user-malicious cases.
First
For this solution you will need to download a MIME parser from here. It's your choice which MIME parser you use; I found this one ad hoc for the purposes of this answer.
Theory
In theory, the browser sends headers that your script should match in its response for proper browser-side parsing. I especially have the HTTP_ACCEPT header in mind.
Code example
Once you have downloaded the MIME parser, let's start by creating the file test.php:
<?php // test.php
// https://code.google.com/p/mimeparse/
include_once('mimeparse.php');
$mimeMatch = Mimeparse::best_match(array('text/javascript', 'text/css', 'text/html', 'application/xhtml+xml', 'application/xml', 'image/*'), $_SERVER['HTTP_ACCEPT']);
switch ($mimeMatch) {
    case 'text/javascript': // via <script src>
        echo('alert("this is loaded as script");');
        break;
    case 'image/*': // via <img src>
        header('Location: http://i.stack.imgur.com/sOq8x.jpg?s=128&g=1');
        break;
    case 'text/css': // via <link href>
        echo('body::before{content: "this is written via CSS"}');
        break;
    default:
        var_dump('detected standard file request by matching to ' . $mimeMatch);
        // if __FILE__ is first on the list, it's not included
        if (__FILE__ !== array_shift(get_included_files())) {
            var_dump('file was included or required');
        } else {
            var_dump('file runs on its own');
        }
        // additional detection for an AJAX request
        if (!empty($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
            var_dump('loaded via AJAX request');
        } else {
            var_dump('loaded via not-AJAX request');
        }
        break;
}
die();
You can visit it now to see that the script detects it's loaded directly:
string 'detected standard file request by matching to text/html' (length=55)
string 'file runs on its own' (length=20)
string 'loaded via not-AJAX request' (length=27)
Inclusion - feature showdown
To see what happens with the script in some special cases, you can create an example index.php:
<html>
<head>
    <link rel="stylesheet" type="text/css" href="test.php"/>
</head>
<body>
    <script src="test.php"></script>
    <img src="test.php"/>
    <?php require('test.php'); ?>
</body>
</html>
Description
By parsing some standard-behavior headers sent by the browser, we can loosely predict the context of the page load. It's not 100% reliable and not a very good practice, but great for writing rootkits ;) anyway.
Hopefully the rest is explained by the comments in the PHP code.
Tested with Apache serving and Chrome reading.

External JS file not loading after submit

I have a js file on domain1 that is working fine on domain1. But if I include the js (from domain1) on domain2, it is not working.
The js file makes a connection to a PHP file on domain1 to output some results. How can I make it work on domain2?
[I want the js to keep being served from domain1 itself]
Here is the js file on domain1:
function sendQuery() {
    var container = $('#DisplayDiv');
    $(container).html('<img src="http://www.domain1.com/loading.gif">');
    var newhtml = '';
    $.ajax({
        type: 'POST',
        url: 'http://www.domain1.com/data.php',
        data: $('#SubmitForm').serialize(),
        success: function (response) {
            $('#DisplayDiv').html(response);
        }
    });
    return false;
}
It works up to the loading.gif, but no data is output from the external output.php file when called from domain2.
[Here domain1 & domain2 are used only as examples]
WORKING FINE NOW!!
Thanks to @Ohgodwhy: adding Header set Access-Control-Allow-Origin "*" to the .htaccess on domain1 fixed it.
It is not clear what you want exactly; it would be excellent if you posted your code here.
But I think if you want to connect any js file from any domain to your domain, you can use the ordinary declaration for it:
in HTML:
<script type="text/javascript" src="https://googledrive.com/host/0B248VFEZkAAiNjhxaDNUZVpsVHM" charset="utf-8"></script>
in PHP:
echo '<script type="text/javascript" src="https://googledrive.com/host/0B248VFEZkAAiNjhxaDNUZVpsVHM" charset="utf-8"></script>';
Very important notes:
1- You must take care of script order for dependent scripts.
2- The element which you call must exist and be visible at call time.
JavaScript doesn't allow cross-domain AJAX calls (the same-origin policy).
There are some options available to work around that, like JSONP.
See this link for more options: link
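For illustration, a JSONP endpoint on domain1 might look roughly like this (a sketch; the file name, the callback parameter, and the returned data are assumptions, not the asker's actual data.php):
<?php
// jsonp.php on domain1 - a hypothetical endpoint.
// The client on domain2 loads it with an ordinary script tag:
//   <script src="http://www.domain1.com/jsonp.php?callback=handleData"></script>
$callback = isset($_GET['callback']) ? $_GET['callback'] : 'callback';
$callback = preg_replace('/[^a-zA-Z0-9_]/', '', $callback); // keep it a safe JS identifier
$data = array('ip' => $_SERVER['REMOTE_ADDR']); // stand-in for whatever data.php returns
header('Content-Type: application/javascript');
echo $callback . '(' . json_encode($data) . ');';
?>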

javascript / php - Get src of image from the URL of the site

I noticed that at http://avengersalliance.wikia.com/wiki/File:Effect_Icon_186.png there is an image (a small one). Click on it, and you are brought to another page: http://img2.wikia.nocookie.net/__cb20140312005948/avengersalliance/images/f/f1/Effect_Icon_186.png.
For http://avengersalliance.wikia.com/wiki/File:Effect_Icon_187.png, after clicking on the image there, you are brought to another page: http://img4.wikia.nocookie.net/__cb20140313020718/avengersalliance/images/0/0c/Effect_Icon_187.png
There are many similar pages, from http://avengersalliance.wikia.com/wiki/File:Effect_Icon_001.png to http://avengersalliance.wikia.com/wiki/File:Effect_Icon_190.png (the last one).
I'm not sure if the image link is somehow related to the link of its parent page, but may I know: is it possible to get the http://img2.wikia.nocookie.net/__cb20140312005948/avengersalliance/images/f/f1/Effect_Icon_186.png string from the string http://avengersalliance.wikia.com/wiki/File:Effect_Icon_186.png, using PHP or JavaScript? I would appreciate your help.
Here is a small PHP script that can do this. It uses cURL to fetch the page and DOMDocument to parse the HTML.
<?php
/*
 * For educational purposes only
 */
function get_wiki_image($url = '') {
    if (empty($url)) return;
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
    $output = curl_exec($curl);
    curl_close($curl);
    $DOM = new DOMDocument;
    libxml_use_internal_errors(true);
    $DOM->loadHTML($output);
    libxml_use_internal_errors(false);
    // the full-size image URL is the src of the image inside the #file element
    return $DOM->getElementById('file')->firstChild->firstChild->getAttribute('src');
}
echo get_wiki_image('http://avengersalliance.wikia.com/wiki/File%3aEffect_Icon_186.png');
Alternatively, in JavaScript on the page itself, you can access the image by class name, for example, then select the one you want with [n]; after that, getAttribute and you've got it:
document.getElementsByClassName('icon cup')[0].getAttribute('src')
Hope it helps
