How to get data with JavaScript from another server? - javascript

How can I make requests to other servers (i.e. fetch a page from any desired server) with JavaScript running in the user's browser? Methods like XMLHttpRequest are restricted by the same-origin policy; are there ways around those restrictions, or other methods entirely?
That is a general question; specifically, I want to check a series of random websites and see whether they contain a certain element, so I need the HTML content of each website without downloading any additional files, all in a JavaScript file and without any forwarding or proxy mechanism on a server.
(Note: one way is using Greasemonkey and its GM_xmlhttpRequest.)

You should check out jQuery. It has a rich set of AJAX functionality that can do all of this. You can load an external page and parse its HTML content with intuitive, CSS-like selectors.
An example using $.get():
$.get("anotherPage.html", {}, function(results){
  alert(results); // will show the HTML from anotherPage.html
  alert($(results).find("div.scores").html()); // shows the "scores" div from the results
});
For external domains I've had to author a local PHP script that acts as a middleman: jQuery calls the local PHP script, passing another server's URL as an argument; the PHP script gathers the data; and jQuery reads the data back from the local script.
$.get("middleman.php", {"site":"http://www.google.com"}, function(results){
alert(results); // middleman gives Google's HTML to jQuery
});
Giving middleman.php something along the lines of:
<?php
// Do not use as-is, this is only an example: an open proxy like this will
// fetch any URL it is given, so validate $_GET["site"] against an allowlist first.
// $_GET["site"] is set by jQuery to "http://www.google.com"
print file_get_contents($_GET["site"]);
?>

Update 2018:
You can only make a cross-domain request under one of the following four conditions:
1. The response header contains Access-Control-Allow-Origin: *
Demo:
$.ajax({
  url: 'https://api.myjson.com/bins/bq6eu',
  success: function(response){
    console.log(response.string);
  },
  error: function(response){
    console.log('server error');
  }
})
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
2. Use a server as a bridge or proxy to the target
Demo:
$.ajax({
  url: 'https://cors-anywhere.herokuapp.com/http://whatismyip.akamai.com/',
  success: function(response){
    console.log('server IP: ' + response);
  },
  error: function(response){
    console.log('bridge server error');
  }
})
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
3. Use a browser add-on to enable Access-Control-Allow-Origin: *
4. Disable the browser's web security
Chrome:
chrome.exe --disable-web-security
(newer versions of Chrome also require a --user-data-dir argument for this flag to take effect)
Firefox:
about:config -> security.fileuri.strict_origin_policy -> false
end
Noob old answer, 2011:
$.get() can fetch data from jsbin.com, but I don't know why it can't fetch data from another site like google.com.
$.get('http://jsbin.com/ufotu5', {}, function(results){
  alert(results);
});
demo: http://jsfiddle.net/Xj234/
Tested with Firefox, Chrome and Safari.

Write a proxy script that forwards the HTTP request from your domain; this will bypass the XMLHttpRequest restrictions.
If you're using PHP, simply use cURL to request and read the page, then spit out the HTML as if it were from your domain.
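As a minimal sketch, such a proxy could look like this in Node.js (illustration only: it forwards any URL it is given, so validate the target against an allowlist before deploying anything like it):
const http = require('http');
const https = require('https');
http.createServer(function (req, res) {
  // The target URL is passed as ?site=..., e.g. /?site=https://www.example.com/
  const target = new URL(req.url, 'http://localhost').searchParams.get('site');
  if (!target) { res.statusCode = 400; res.end('missing ?site= parameter'); return; }
  const client = target.indexOf('https') === 0 ? https : http;
  client.get(target, function (upstream) {
    res.writeHead(upstream.statusCode, { 'Content-Type': 'text/html' });
    upstream.pipe(res); // stream the remote HTML back as if it came from our own domain
  }).on('error', function () { res.statusCode = 502; res.end('upstream error'); });
}).listen(8080);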

This is rather easy... if you know the 'secret' trick almost nobody shares: it's called Yahoo YQL. (Note: Yahoo has since retired the YQL service, so this answer is mainly of historical interest.)
So in order to regain 'power to the user' (and returning to the convenient mantra: 'never accept no'), just use http://query.yahooapis.com/ (instead of a server-side PHP proxy script).
jQuery is not strictly needed.
EXAMPLE 1:
Using the SQL-like command:
select * from html
where url="http://stackoverflow.com"
and xpath='//div/h3/a'
The following link will scrape SO for the newest questions (bypassing cross-domain security bull$#!7):
http://query.yahooapis.com/v1/public/yql?q=select%20title%20from%20html%20where%20url%3D%22http%3A%2F%2Fstackoverflow.com%22%20and%0A%20%20%20%20%20%20xpath%3D%27%2F%2Fdiv%2Fh3%2Fa%27%0A%20%20%20%20&format=json&callback=cbfunc
As you can see, this returns a JSON array (one can also choose XML) and calls the callback function cbfunc.
Indeed, as a 'bonus' you also save a kitten every time you did not need to regex data out of 'tag-soup'.
Do you hear your little mad scientist inside yourself starting to giggle?
Then see this answer for more info (and don't forget its comments for more examples).
Good Luck!

You can also use an iframe to emulate an AJAX request. This saves you the mess of having to code a backend solution for a frontend problem. Here is an example:
function setUploadEvent(){
  var eventType = "";
  var iframe = document.getElementById("iframeId");
  try{
    if (/(?!.*?compatible|.*?webkit)^mozilla|opera/i.test(navigator.userAgent)) {
      /* for Mozilla / Opera9 */
      eventType = "onload";
    } else {
      /* for IE */
      eventType = "onreadystatechange";
    }
    /* attach the handler as a property so the same code works for both event models */
    iframe[eventType] = function(){
      var doc = iframe.contentDocument || iframe.contentWindow.document;
      var response = doc.body.innerHTML; /* or whatever content you are looking for */
    };
  }
  catch(e){
    alert("Error loading content");
  }
}
That should do the trick. Please note that the browser-detection line is not the cleanest; you should use the detection provided by any of the common JS frameworks (Prototype, jQuery, etc.).

You will need to write a proxy on the server to do this: all requests go to your server, and your server then loads the HTML and sends it back to the client. There is no good way to implement this via JavaScript alone.
jQuery contains functionality to load JSON data or external scripts using XMLHttpRequest, but this functionality cannot be used for HTML pages. Also, you may check this thread of the jQuery mailing list.
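(For completeness, jQuery's JSONP form looks like the sketch below; the endpoint is hypothetical and would need to actually support a callback parameter:)
$.getJSON('http://example.com/data?callback=?', function(data){
  // jQuery replaces the "?" with a generated callback name and loads
  // the response as a script, sidestepping the XMLHttpRequest restrictions
  console.log(data);
});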

<script language="JavaScript" type="text/javascript" src="http://www.example.com/hello.js"></script>
You add the data into hello.js as an array, JSON or similar. Example:
var daysInMonth = new Array(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31);
Getting JavaScript from another server doesn't get much simpler... :-)
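If the data needs to be fetched on demand rather than at page load, the same trick works by injecting the script element dynamically (reusing the hello.js and daysInMonth example above):
var s = document.createElement('script');
s.src = 'http://www.example.com/hello.js';
s.onload = function(){
  // hello.js has run, so the array it defines is now available
  console.log(daysInMonth[1]); // 28
};
document.head.appendChild(s);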

Thanks a lot, this is really a good trick. I did it this way:
test.html
<!DOCTYPE html>
<html>
<head>
<script>
function loadXMLDoc()
{
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","sp.php",true);
xmlhttp.send();
}
</script>
</head>
<body>
<h2>Using the XMLHttpRequest object</h2>
<div id="myDiv"></div>
<button type="button" onclick="loadXMLDoc()">Change Content</button>
</body>
</html>
sp.php
<?php
print file_get_contents("http://your.url.com/you-can-use-cross-domain");
?>

Related

online offline check using javascript [duplicate]

This question already has answers here:
Detect the Internet connection is offline?
(22 answers)
Closed 8 years ago.
How do you check if there is an internet connection using jQuery? That way I could have some conditionals saying "use the Google-cached version of jQuery during production; use either that or a local version during development, depending on the internet connection".
The best option for your specific case might be:
Right before your closing </body> tag:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/vendor/jquery-1.10.2.min.js"><\/script>')</script>
This is probably the easiest way given that your issue is centered around jQuery.
If you wanted a more robust solution you could try:
var online = navigator.onLine;
Read more about the W3C spec on offline web apps; be aware, however, that this works best in modern web browsers, and with older web browsers it may not work as expected, or at all.
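A small sketch of that flag in use, combined with the matching window events so you also catch changes in connectivity:
function updateStatus(){
  console.log(navigator.onLine ? 'online' : 'offline');
}
window.addEventListener('online', updateStatus);
window.addEventListener('offline', updateStatus);
updateStatus(); // initial check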
Alternatively, an XHR request to your own server isn't that bad a method for testing your connectivity. Considering that one of the other answers states there are too many points of failure for an XHR: if your XHR is flawed when establishing its connection, it'll also be flawed during routine use anyhow. If your site is unreachable for any reason, then your other services running on the same servers will likely be unreachable too. That decision is up to you.
I wouldn't recommend making an XHR request to someone else's service, even google.com for that matter. Make the request to your server, or not at all.
What does it mean to be "online"?
There seems to be some confusion around what being "online" means. The internet is a bunch of networks, and sometimes you're on a VPN without access to the internet "at large" or the world wide web. Companies often have their own networks with limited connectivity to other external networks, and on those you could still be considered "online". Being online only means that you are connected to a network; it says nothing about the availability or reachability of the services you are trying to connect to.
To determine if a host is reachable from your network, you could do this:
function hostReachable() {
  // Handle IE and more capable browsers
  var xhr = new ( window.ActiveXObject || XMLHttpRequest )( "Microsoft.XMLHTTP" );
  // Open a new synchronous HEAD request to the root hostname, with a random param to bust the cache
  xhr.open( "HEAD", "//" + window.location.hostname + "/?rand=" + Math.floor((1 + Math.random()) * 0x10000), false );
  // Issue the request and handle the response
  try {
    xhr.send();
    return ( xhr.status >= 200 && (xhr.status < 300 || xhr.status === 304) );
  } catch (error) {
    return false;
  }
}
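Usage is then a plain boolean check; note that the request above is synchronous, so the call blocks until the HEAD round-trip completes:
if (hostReachable()) {
  console.log('host is reachable');
} else {
  console.log('host is unreachable');
}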
You can also find the Gist for that here: https://gist.github.com/jpsilvashy/5725579
Details on local implementation
Some people have commented, "I'm always getting false back". That's probably because you're testing it on your local server. Whatever server you're making the request to needs to be able to respond to the HEAD request; that can of course be changed to a GET if you want.
Ok, maybe a bit late in the game, but what about checking with an online image?
I mean, the OP needs to know whether to grab the Google CDN copy or the local jQuery copy, but that doesn't mean the browser can't read JavaScript no matter what, right?
<script>
function doConnectFunction() {
// Grab the GOOGLE CMD
}
function doNotConnectFunction() {
// Grab the LOCAL JQ
}
var i = new Image();
i.onload = doConnectFunction;
i.onerror = doNotConnectFunction;
// CHANGE IMAGE URL TO ANY IMAGE YOU KNOW IS LIVE
i.src = 'http://gfx2.hotmail.com/mail/uxp/w4/m4/pr014/h/s7.png?d=' + escape(Date());
// escape(Date()) is necessary to override possibility of image coming from cache
</script>
Just my 2 cents
5-years-later version:
Today there are JS libraries for this, if you don't want to get into the nitty-gritty of the different methods described on this page.
One of these is https://github.com/hubspot/offline. It checks the connectivity of a pre-defined URI, by default your favicon. It automatically detects when the user's connectivity has been re-established and provides neat events like up and down, which you can bind to in order to update your UI.
You can mimic the Ping command.
Use Ajax to request a timestamp from your own server, and define a timer using setTimeout with 5 seconds; if there's no response, try again.
If there's no response after 4 attempts, you can assume the internet is down.
You can then run this check at regular intervals, like every 1 or 3 minutes.
That seems like a good and clean solution to me.
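A sketch of that routine, assuming a hypothetical /timestamp endpoint on your own server; four failed attempts with a 5-second timeout each are treated as the internet being down:
function ping(attemptsLeft, onResult) {
  if (attemptsLeft === 0) { onResult(false); return; }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/timestamp?rand=' + Math.random(), true); // hypothetical endpoint
  xhr.timeout = 5000; // give up on this attempt after 5 seconds
  xhr.onload = function(){ onResult(true); };
  xhr.onerror = xhr.ontimeout = function(){ ping(attemptsLeft - 1, onResult); };
  xhr.send();
}
// Re-check at a regular interval, e.g. every 3 minutes
setInterval(function(){
  ping(4, function(up){ console.log(up ? 'online' : 'offline'); });
}, 3 * 60 * 1000);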
You can try sending XHR requests a few times; if you get errors, it means there's a problem with the internet connection.
I wrote a jQuery plugin for doing this. By default it checks the current URL (because that's already been loaded once from the web), or you can specify a URL to use as an argument. Always making a request to Google isn't the best idea because it's blocked in different countries at different times; you might also be at the mercy of whatever the connection across a particular ocean/weather front/political climate is like that day.
http://tomriley.net/blog/archives/111
I have a solution that works, to check whether an internet connection exists:
$.ajax({
  url: "http://www.google.com",
  context: document.body,
  error: function(jqXHR, exception) {
    alert('Offline');
  },
  success: function() {
    alert('Online');
  }
})
Sending XHR requests is bad because the check could fail just because that particular server is down. Instead, use Google's API library to load their cached version of jQuery.
You can use Google's API to perform a callback after loading jQuery, and check whether jQuery was loaded successfully. Something like the code below should work:
<script type="text/javascript">
google.load("jquery");
// Call this function when the page has been loaded
function test_connection() {
if($){
//jQuery WAS loaded.
} else {
//jQuery failed to load. Grab the local copy.
}
}
google.setOnLoadCallback(test_connection);
</script>
The Google API documentation can be found here.
A much simpler solution:
<script language="javascript" src="http://maps.google.com/maps/api/js?v=3.2&sensor=false"></script>
and later in the code:
var online;
// check whether this function works (online only)
try {
  var x = google.maps.MapTypeId.TERRAIN;
  online = true;
} catch (e) {
  online = false;
}
console.log(online);
When you're not online, the Google script will not have loaded, so the reference results in an error and an exception is thrown.

Cookies not working when page accessed via file://

My code works in Firefox, and when I visit w3schools using Chrome to test my code in their editor it works fine too, but when I launch my code in Chrome from Notepad++ it doesn't work. It seems that body onload is not working, because I don't get the alert. My Chrome is up to date. Help would be appreciated.
<!DOCTYPE html>
<html>
<head>
<script>
function setCookie(cname,cvalue,exdays){
var d=new Date();
d.setTime(d.getTime()+(exdays*24*60*60*1000));
var expires="expires="+d.toUTCString();
document.cookie=cname +"="+cvalue+"; "+expires;
}
function f(){
var user=prompt("What is your name?","");
if(user!="" && user!=null){
setCookie("username",user,30);}
}
function getC(cname){
var name=cname+"=";
var ca=document.cookie.split(";");
for(var i=0;i<ca.length;i++){
var c=ca[i];
while(c.charAt(0)==" ")c=c.substring(1);
if(c.indexOf(name)==0) return c.substring(name.length,c.length);
}
return "";
}
function checkcooki(){
var user=getC("username");
if(user!=""){
alert("Welcome back "+user);
}
}
</script>
</head>
<body onLoad="checkcooki()">
<input type="button" onclick="f()" value="klick">
</body>
</html>
For a fact: using the file:// protocol does NOT guarantee that cookies work properly, since a cookie needs three things:
A name-value pair containing the actual data
An expiry date after which it is no longer valid
The domain and path of the server it should be sent to
The domain tells the browser to which domain the cookie should be sent. If you don't specify it, it becomes the domain of the page that sets the cookie.
On a file:// protocol you don't have a domain.
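For illustration, here is a cookie string with all three parts spelled out (the values are made up); on a file:// page there is simply nothing valid to put in the domain part:
// Hypothetical values: name-value pair, expiry date, and domain/path
document.cookie = "username=John; expires=Wed, 31 Dec 2025 23:59:59 GMT; path=/; domain=example.com";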
Now some browsers may have found workarounds for this, like Firefox and IE. You can test your code on these browsers, but they will not use cookies in the same way as on a web server.
Proper cross-browser testing in your case requires the http:// protocol.
I suggest you start a jsfiddle or set up a web server (IIS, Apache).
Proper reading on cookies: QuirksMode.
If you are still determined to get it working in Chrome over the file:// protocol, you might have a small chance if you get the path right:
path: a properly escaped path => encodeURIComponent(document.domain) or "c:\/my%20folder\/index.html" (along these lines, but again, very untrustworthy territory)
domain: "/" (no idea what else you could try here)
Your user variable must be a blank string.
Put an alert at the very top of your checkcooki() function to verify that body onload works.

Page Navigation on Windows Phone 8 using HTML/JavaScript/Apache Cordova

I have some problems with page navigation on Windows Phone 8 using Apache Cordova 3.0.
I tried different ways to solve this problem, but it still does not work.
At first I tried to use forms to navigate to another page:
<form action="CreateUser.html" method="get">
<input class="buttons" name="btnCreateUser" type="submit" value="Create User" />
</form>
When I click on the button, the page cannot be found. The CreateUser.html page is in the same directory. If I use a browser (Chrome/IE), it works.
When I change the action to http://www.google.com, both options (browser and phone) work.
I also tried to navigate to another page using JavaScript. Here is my code:
function get(httpUrl) {
  var xmlHttp = new XMLHttpRequest();
  xmlHttp.open("GET", httpUrl, true);
  xmlHttp.send(null);
  return xmlHttp.responseText; // note: the request is asynchronous, so this returns before the response arrives
}
Now I use the onclick="get('CreateUser')" event of the button, but there is no reaction,
both in the browser and on the mobile device.
The only thing that worked for me is the window.location feature, but it seems that I can't transfer information to the next page that way.
Is there any way to navigate between those two pages and transfer some information?
Or did I just do something wrong in my code?
"The CreateUser.html page" if i'm correct you are using AJAX to read file (page) contents and paste them in HTML?
If yes, then read this:
2.1. Cross-domain problem
Before making AJAX request you must allow cross-domain requests and core support, by setting:
jQuery.support.cors = true;
$.mobile.allowCrossDomainPages = true;
Those must be set in a specific-phonegap function “DeviceReady”, example:
document.addEventListener('deviceready', function () {
  jQuery.support.cors = true;
  $.mobile.allowCrossDomainPages = true;
  $.ajax({
    url: "www/about.txt",
    dataType: 'text'
  }).done(function (result) {
    alert(result);
  });
});
2.2. URL
When building a Windows Phone 8 oriented application, in an AJAX request you MUST specify the full path to the resource, for example:
url: "www/about.txt",
When building an application for other platforms, you MUST NOT specify the full path, for example:
url: "about.txt",
2.3. Source file extensions
Be careful with files that have unusual extensions, like the template extension *.tpl or similar. Sometimes AJAX doesn't like them; I suggest sticking to plain *.txt and *.html extensions.

How do I store static content of a web page in a javascript variable?

This is what I am trying to accomplish:
Get the static content of an 'external' URL and check it for certain keywords, for example "User Guide" or "page not found".
I tried to use Ajax, dojo.xhr, etc., but they don't support cross-domain requests, and in my case it is an external URL. Also, I cannot use jQuery.
I also looked at dojo.io.iframe, but I couldn't find a useful example to accomplish this.
A dojo.io.iframe example would be really helpful.
Please help.
Thanks!
Modern browsers restrict cross-domain scripting. If you're the maintainer of the server, read about Access-Control-Allow-Origin to learn how to enable cross-site requests on your website.
EDIT: To check whether an external site is down or not, you could use this method. The external site needs to have an image file; most sites have a file called favicon.ico at their root directory.
Example, testing whether http://www.google.com/ is online or not:
var test = new Image();
// If you're sure that the element is not a JavaScript file:
// var test = document.createElement("script");
// If you're sure that the external website is reliable, you can use:
// var test = document.createElement("iframe");
function rmtmp(){ if (test.parentNode) test.parentNode.removeChild(test); }
function online(){
  // The website is likely to be up and running.
  rmtmp();
}
function offline(){
  // The file is not a valid image file, or the website is down.
  rmtmp();
  alert("Something bad happened.");
}
if (window.addEventListener){
  test.addEventListener("load", online, true);
  test.addEventListener("error", offline, true);
} else if (window.attachEvent){
  test.attachEvent("onload", online);
  test.attachEvent("onerror", offline);
} else {
  test.onload = online;
  test.onerror = offline;
}
test.src = "http://www.google.com/favicon.ico?" + (new Date).getTime();
/* "+ (new Date).getTime()" is needed to ensure that every new attempt
   doesn't get a cached version of the image */
if (/^(iframe|script)$/i.test(test.tagName)){
  test.style.display = "none";
  document.body.appendChild(test);
}
This will only work with image resources. Read the comments to see how to use other sources.
Try this:
<script src="https://ajax.googleapis.com/ajax/libs/dojo/1.6.1/dojo/dojo.xd.js.uncompressed.js" type="text/javascript" djConfig="parseOnLoad:true"></script>
<script>
dojo.require("dojo.io.script");
</script>
<script>
dojo.addOnLoad(function(){
dojo.io.script.get({
url: "http://badlink.google.com/",
//url: "http://www.google.com/",
load: function(response, ioArgs) {
//if no (http) error, it means the link works
alert("yes, the url works!")
}
});
});
</script>

Is there a way to have browsers ignore or override xml-stylesheet processing instructions?

I'm trying to write a bookmarklet to help some QA testers submit useful debugging information when they come across issues. Currently I can set window.location to a URL that provides this debugging information, but this resource is an XML document with an xml-stylesheet processing instruction.
It would actually be more convenient if the testers were able to see either the raw XML data as plain text, or the default XML rendering for IE and Firefox.
Does anyone know a way to disable or override xml-stylesheet directives provided inside an XML document, using Internet Explorer or Firefox?
Edit: I've opened a bounty on this question. Requirements:
Client-side code only, no user intervention allowed
Solutions for both IE and Firefox needed (they can be different solutions)
Disabling stylesheet processing and rendering it as text is acceptable
Overriding stylesheet processing with a custom XSL is acceptable
Rendering the XML with the browser default XML stylesheet is acceptable
I've heard, but cannot verify for IE, that both IE and Firefox support the "view-source:" pseudo-protocol. Firefox on Mac indeed understands it, but Safari does not.
The following bookmarklet will not trigger the XSLT transformation specified in the XML. And though Firefox will render the result with some colours, it does not apply the default transformation it would normally use for XML without any XSLT (so view-source does NOT yield the collapsible document tree that Firefox would normally show):
javascript:(function(){
  var u = 'http://www.w3schools.com/xsl/cdcatalog_with_ex1.xml';
  var w = window.open();
  w.document.location.href = 'view-source:' + u;
})()
When fetching the document using Ajax, one is not limited to the alert that oneporter used, but can display it in a new window as well. Again: this will not invoke the XSLT transformation that is specified:
javascript:(function(){
  var u = 'http://www.w3schools.com/xsl/cdcatalog_with_ex1.xml';
  var w = window.open(); /* open right away for popup blockers */
  var x = new XMLHttpRequest();
  x.open('GET', u, true);
  x.onreadystatechange = function(){
    if(x.readyState == 4){
      w.document.open('text/html');
      /* hack to encode HTML entities */
      var d = document.createElement('div');
      var t = document.createTextNode(x.responseText);
      d.appendChild(t);
      w.document.write('<html><body><pre>'
        + d.innerHTML + '</pre></body></html>');
      w.document.close();
      w.focus();
    }
  };
  x.send(null);
})()
Can't you just do "View Source" in both browsers?
You can avoid the processing instruction by using an intermediary step to pre-process the XML before the content is output in the browser.
Client-side suggestion
Retrieve the relevant XML document via an AJAX request
Parse the XML into a DOM (note: a DOM not the DOM)
Traverse the DOM and render the required data in the browser, as in the sketch below
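A minimal sketch of that client-side route, using DOMParser to build a detached DOM you can traverse (the URL is a placeholder, and the request must be same-origin or CORS-enabled to succeed):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'document.xml', true); // placeholder URL
xhr.onload = function(){
  // Parsing into a detached DOM means the xml-stylesheet instruction is never applied
  var doc = new DOMParser().parseFromString(xhr.responseText, 'application/xml');
  // Traverse the detached DOM and render whatever is needed
  document.body.textContent = doc.documentElement.nodeName;
};
xhr.send();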
Server-side suggestion
Instead of directly requesting the pertinent XML document, make a request to a proxy script that removes from the XML content all processing instructions, or indeed anything else you don't want.
Instead of:
window.location = 'http://example.com/document.xml';
use:
window.location = 'http://example.com/proxy/';
The script at http://example.com/proxy/ would:
Load document.xml
Use whatever is necessary to remove the processing instruction from the XML content
Output the XML content
As long as you don't have to deal with cross-domain permissions, a simple Ajax request and an alert box showing the XML source would do the trick. You'll have to add a little to the xmlHttp declaration to make it compatible with IE.
<html>
<body>
<script language="JavaScript">
function ajaxFunction()
{
var xmlHttp=new XMLHttpRequest();;
xmlHttp.onreadystatechange=function()
{
if(xmlHttp.readyState==4)
{
alert(xmlHttp.responseText);
}
}
xmlHttp.open("GET","YOURFILE.xml",true);
xmlHttp.send(null);
}
</script>
Errors
</body>
</html>
As far as I know, there is no way of doing what you are trying to do. The issue is that JavaScript cannot read the DOM of the XML from a client-side XML/XSL transform; as far as the JavaScript is concerned, it is executing against a normal HTML DOM.
However, there may be some hope, depending on the type of web app. You could use Ajax to fetch the XML of the current URL; as long as there is no POST data, or any other randomness, this method should work fine.
