This is my first question on SO, but you have all helped me enormously in the past from existing posts - so thank you!
I am working on a web/database system on localhost through XAMPP, but I need to back up an SQL file to my one&one online server. I am using a CORS request in JavaScript to make the backup, and it works on my PC but not on my client's. The request itself works for us both, as the files are saved, but my client never receives the response message confirming the save. Does anyone know why this might be? We are both running IE9 and the same XAMPP version.
Code I am using for CORS request is:
var request = new XMLHttpRequest();
request.open('POST', "http://www.mysite/Backups", true);
request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded'); // the body below is form-encoded
request.onload = function ()
{
    if (request.status === 200)
    {
        // response functions here
    }
};
request.send("Content=" + backupContent); // send() belongs outside the onload handler
Hope this is in the correct question format - it's my first time, remember!
A year ago I had a really similar problem with IE. Your client is using IE, which means they are probably a fairly big and serious organisation, so I bet they also have specific IE security settings.
Go to your IE security preferences and restrict everything you can - I cannot tell you the exact name of the setting, as I no longer have Internet Explorer, but that way you can reproduce this behaviour.
How to solve the issue? Usually clients don't agree to change their security settings, so the only approach that worked for me was using JSONP instead of CORS. I know: not modern, ugly... but it works.
This is just a guess, I trust that everything is done correctly on your side.
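If you do go down the JSONP route, a minimal sketch could look like the following (everything here is an assumption on my part: the confirm URL, the callback name and the fact that your server wraps its reply in that callback):

function jsonp(url, callbackName, onDone) {
    var script = document.createElement('script');
    window[callbackName] = function (data) {
        window[callbackName] = undefined;      // tidy up the temporary global
        script.parentNode.removeChild(script); // remove the injected script tag
        onDone(data);
    };
    script.src = url + (url.indexOf('?') === -1 ? '?' : '&') + 'callback=' + callbackName;
    document.head.appendChild(script);         // the response runs as a script and calls the callback
}

// Hypothetical usage - assumes the server replies with handleBackupReply({"saved": true})
jsonp('http://www.mysite/Backups/confirm.php', 'handleBackupReply', function (reply) {
    alert(reply.saved ? 'Backup confirmed' : 'Backup failed');
});

Bear in mind JSONP is GET-only, so this would only fetch the confirmation; the backup itself would still have to be sent some other way.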
Related
I currently have a simple webpage which consists of just a .js, a .css and an .html file. I do not want to use any Node.js stuff.
Given these constraints, I would like to ask whether it is possible to search the content of external webpages using JavaScript (e.g. running a web worker in the background).
E.g. I would like to do:
Get first url link of a google image search.
Edit:
I tried it and it worked fine; however, after two weeks I now get this error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at ....
(Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
Any ideas on how to solve that?
Here is the error described by firefox:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSMissingAllowOrigin
Yes, this is possible. Just use the XMLHttpRequest API:
var request = new XMLHttpRequest();
request.open("GET", "https://bypasscors.herokuapp.com/api/?url=" + encodeURIComponent("https://duckduckgo.com/html/?q=stack+overflow"), true); // last parameter must be true
request.responseType = "document";
request.onload = function (e) {
if (request.readyState === 4) {
if (request.status === 200) {
var a = request.responseXML.querySelector("div.result:nth-child(1) > div:nth-child(1) > h2:nth-child(1) > a:nth-child(1)");
console.log(a.href);
document.body.appendChild(a);
} else {
console.error(request.status, request.statusText);
}
}
};
request.onerror = function (e) {
console.error(request.status, request.statusText);
};
request.send(null); // not a POST request, so don't send extra data
Note that I had to use a proxy to bypass CORS issues; if you want to do this, run your own proxy on your own server.
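For what it's worth, a minimal sketch of such a proxy might look like this (this is my assumption, not what the Heroku app above actually runs; it uses Node with the express, cors and node-fetch packages, and any other server-side stack would do just as well):

// proxy.js - hypothetical sketch, not production-ready
const express = require("express");
const cors = require("cors");
const fetch = require("node-fetch");

const app = express();
app.use(cors()); // adds Access-Control-Allow-Origin: * to every response

app.get("/api/", async (req, res) => {
    try {
        const upstream = await fetch(req.query.url);        // fetch the target page server-side
        res.type("text/html").send(await upstream.text());  // relay it to the browser
    } catch (err) {
        res.status(502).send(String(err));
    }
});

app.listen(3000, () => console.log("proxy listening on http://localhost:3000"));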
Yes, it is theoretically possible to do “web scraping” (i.e. parsing webpages) on the client. There are several restrictions however and I would question why you wouldn’t choose a program that runs on a server or desktop instead.
Web workers are able to request HTML content using XMLHttpRequest, and then parse the incoming XML programmatically. Note that the target webpage must send the appropriate CORS headers if it belongs to a foreign domain. You could then pick out content from the resulting HTML.
Parsing content generated with CSS and JavaScript will be harder. You will either have to construct sandboxed content on your host page from the input stream, or run some kind of parser, which doesn’t seem very feasible.
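As a rough sketch of the web-worker route described above (the URL is only a placeholder and the target must send Access-Control-Allow-Origin for your origin; note that DOMParser is not available inside workers, so the raw HTML is handed back to the page for parsing):

// worker.js - fetches the raw HTML off the main thread (XMLHttpRequest is available in workers)
self.onmessage = function (e) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", e.data, true);
    xhr.onload = function () {
        if (xhr.status === 200) {
            self.postMessage(xhr.responseText); // hand the HTML back to the page
        }
    };
    xhr.send(null);
};

// main page - parses whatever the worker sends back
var worker = new Worker("worker.js");
worker.onmessage = function (e) {
    var doc = new DOMParser().parseFromString(e.data, "text/html");
    var firstLink = doc.querySelector("a"); // adjust the selector to the page you are scraping
    if (firstLink) {
        console.log(firstLink.getAttribute("href"));
    }
};
worker.postMessage("https://example.com/"); // placeholder URL - must allow your origin via CORS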
In short, the answer to your question is yes, because you have the tools to do a network request and a Turing-complete language with which to build any kind of parsing and scraping that you wanted. So technically anything is possible.
But the real question is: would it be wise? Would you ever choose this approach when other technologies are at hand? Well, no. For most cases I don’t see why you wouldn’t just write a server side program using e.g. headless Chrome.
If you don’t want to use Node - or aren’t able to deploy Node for some reason - there are many web scraping packages and prior art in languages such as Go, C, Java and Python. Search the package manager of your preferred programming language and you will likely find several.
I have heard about Python for scraping too, but Node.js + Puppeteer kicks ass... and is pretty easy to learn.
I am currently in the process of creating a browser extension for a university project. However, as I was writing the extension I hit a really weird problem. To fully understand my situation, I will need to describe in depth where my issue comes from.
The extension I am working on has to have a feature that checks whether the browser can connect to the internet or not. That is why I decided to create a very simple AJAX request function and, depending on the result it returns, determine whether the user has an internet connection.
That is why I created the very simple AJAX function you can see below this line.
$.ajax({
url: "https://enable-cors.org/index.html",
crossDomain: true,
}).done(function() {
console.log("The link is active");
}).fail(function() {
console.log("Please try again later.");
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
So far, as far as I understand what it is doing, it is working fine. For example, if you run the function as it is, it will successfully connect to the URL and proceed with the ".done(function...". If you change the URL to "index273.index", a file which does not exist, it will proceed with the ".fail(function...". I was happy with the result until I decided to test it further and unplugged the network cable from my computer. Then, when I launched the extension, it returned the last result from when the browser still had an internet connection. My explanation for why the function does this is that it is caching the URL result, and if it cannot connect it returns the last cached value. My next step to try and solve this was to add "cache: false" after the "crossDomain: true" property, but after that, when I launch the extension, it gives the following error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://enable-cors.org/index?_=1538599523573. (Reason: CORS header 'Access-Control-Allow-Origin' missing).
If someone can help me sort out this problem I would be extremely grateful. I apologise in advance for my English; it is not my native language.
PS: I am trying to implement this function in the popup menu, not into the "content_scripts" category. I am currently testing this under Firefox v62.0.3 (the latest available version when I write this post).
Best regards,
George
Maybe instead of calling the URL to check whether the internet connection is available, you could try using the Navigator object: https://developer.mozilla.org/en-US/docs/Web/API/Navigator/connection
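For example, a minimal sketch using navigator.onLine and the online/offline events (navigator.connection from the link above gives more detail but is not supported in every browser):

// Connectivity check without making any network request
function reportConnectivity() {
    if (navigator.onLine) {
        console.log("The link is active");
    } else {
        console.log("Please try again later.");
    }
}

reportConnectivity();
window.addEventListener("online", reportConnectivity);  // fires when the browser regains a connection
window.addEventListener("offline", reportConnectivity); // fires when the connection drops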
Unless the remote server allows your origin (i.e. allows CORS), you can't access it; it's a security restriction.
But there are other things you can do:
You can load an image and fire an event when the image loads (see the sketch after this list)
You can access remote JSON via a JSONP response
But you can't access other pages, because (unless that server allows it) that's a security issue.
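A sketch of the image technique, assuming you pick a small image on a server you trust (the URL below is a placeholder) and add a cache-buster so a cached copy is not reported as success:

function checkConnection(onResult) {
    var img = new Image();
    img.onload = function () { onResult(true); };   // the image arrived - we are online
    img.onerror = function () { onResult(false); }; // the request failed - probably offline
    // the "?" + Date.now() cache-buster forces a real network hit instead of a cached copy
    img.src = "https://example.com/pixel.png?" + Date.now(); // placeholder URL
}

checkConnection(function (online) {
    console.log(online ? "The link is active" : "Please try again later.");
});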
I am trying to access website APIs from my local drive to work with their data. I followed the JSON doc on MDN (https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Objects/JSON) and it worked great. I can access and get the same data as shown in the doc with the code below. However, when I switch to a different website's API, I get an "Access-Control-Allow-Origin" error, for example if I use (http://quotesondesign.com/wp-json/posts).
After doing some searching online, it seems to have to do with the CORS setup on some servers. I read through the CORS doc (https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) but I'm still having trouble with it.
Some search results also mentioned a workaround like using a proxy, but I am reluctant to use one as it seems unwieldy. I also tried jQuery and AJAX but I still get the same error when it tries to open the request.
<script>
var header = document.querySelector('header');
var section = document.querySelector('section');
var requestURL = 'https://mdn.github.io/learning-area/javascript/oojs/json/superheroes.json';
var request = new XMLHttpRequest();
request.open('GET', requestURL);
request.responseType = 'json'; // note the exact casing: responseType
request.onload = function() {
    var superHeroes = request.response; // already parsed, because responseType is 'json'
    populateHeader(superHeroes);
    showHeroes(superHeroes);
};
request.send();
...
When I say local drive, I mean I saved the HTML code in a Notepad file on my C:\document drive and I am running the file from there.
Any advice? Thanks.
Web browsers enforce the server's CORS policy by default. Developers sometimes disable this security protection to test things out; be sure to turn it back on for normal browsing.
With Chrome on windows, this flag enabled that API to work for me:
chrome.exe --user-data-dir="C:/Chrome dev session" --disable-web-security
This works well in a shortcut for special uses.
More info on bypassing CORS in Chrome and FireFox here:
https://pointdeveloper.com/how-to-bypass-cors-errors-on-chrome-and-firefox-for-testing/
Update: In Firefox, the about:config option to disable "strict_origin_policy" is limited. So to get this working, you can try the CORS Everywhere add-on. This worked for me on Firefox 52.8.
The CORS Everywhere icon will toggle green after you click it, which enables CORS.
I've created some code using cURL (PHP) which allows me to spoof the referrer, or blank it, and then direct the user to another page with a spoofed referrer.
However, the drawback to this is that the IP address in the headers will always be the IP of my server, which isn't a valid solution.
The question;
Is it possible, using client-side scripting (i.e. XMLHttpRequest), to "change" the referrer and then direct the user to a new page?
Thus keeping the user's IP address intact but spoofing the referrer.
If yes, any help would be much appreciated.
Thanks!
Not from JavaScript in a modern browser when the page is rendered.
Update:
See the comments for some manual tools and other JavaScript-based platforms where you technically can spoof the referrer. In the context of the 8-year-old original question, which seems to be about making web requests, the answer is still generally "no."
I don't plan to edit all of my decade-old answers though, so downvoters, have at 'em. I apologize in advance for not correctly foreseeing the future and providing an answer that will last for eternity.
This appears to work in the Firefox Javascript console:
var xhr = new XMLHttpRequest;
xhr.open("get", "http://www.example.com/", true);
xhr.setRequestHeader( 'Referer', 'http://www.fake.com/' );
xhr.send();
In my server log I see:
referer: http://www.fake.com/
A little late to the table, but it seems there's been a change since the last post.
Chrome (and probably most modern browsers at this time) no longer allows 'Referer' to be altered programmatically - it's now static-ish.
However, it does allow a custom header to be sent. E.g.:
var xhr = new XMLHttpRequest;
xhr.open("get", "http://www.example.com/", true);
xhr.setRequestHeader('CustomReferer', 'http://www.fake.com/');
xhr.send();
In PHP that header can be read through "HTTP_(header in uppercase)":
$_SERVER['HTTP_CUSTOMREFERER'];
That was the trick for my project...
For many of us probably common knowledge, but for some hopefully helpful!
You can use the Fetch API to partially modify the Referer header.
fetch(url, {
referrer: yourCustomizedReferer, // Note: it's `referrer` with correct spelling, and it's NOT nested inside `headers` option
// ...
});
However, I think it only works when the original Referer header and your wanted Referer header are under the same domain. And it doesn't seem to work in Safari.
Allowing to modify Referer header is quite unexpected though it's argued here that there are other tricks (e.g. pushState()) to do this anyway.
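A rough sketch of the pushState() trick mentioned above; it can only rewrite the URL to another path on the same origin, and requests made afterwards carry that rewritten URL as their Referer:

// Rewrite the address bar to another path on the SAME origin (pushState cannot cross origins)
history.pushState(null, "", "/any-other-path"); // "/any-other-path" is just a placeholder

// Requests made from now on send the rewritten URL as their Referer
fetch("/log-referer"); // placeholder endpoint - check the Referer it receives on the server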
I have written an XMLHttpRequest which runs fine but returns an empty responseText.
The javascript is as follows:
var anUrl = "http://api.xxx.com/rates/csv/rates.txt";
var myRequest = new XMLHttpRequest();
callAjax(anUrl);
function callAjax(url) {
myRequest.open("GET", url, true);
myRequest.onreadystatechange = responseAjax;
myRequest.setRequestHeader("Cache-Control", "no-cache");
myRequest.send(null);
}
function responseAjax() {
if(myRequest.readyState == 4) {
if(myRequest.status == 200) {
var result = myRequest.responseText;
alert(result);
alert("we made it");
} else {
alert( " An error has occurred: " + myRequest.statusText);
}
}
}
The code runs fine. I can step through it and I get readyState == 4 and status == 200, but the responseText is always blank.
I am getting a log error (in the Safari debugger) of "Error dispatching: getProperties", which I cannot find any reference to.
I have run the code in Safari and Firefox both locally and on a remote server.
The URL when put into a browser will return the string and give a status code of 200.
I wrote similar code to the same URL in a Mac Widget which runs fine, but the same code in a browser never returns a result.
Is http://api.xxx.com/ part of your domain? If not, you are being blocked by the same origin policy.
You may want to check out the following Stack Overflow post for a few possible workarounds:
Ways to circumvent the same-origin policy
PROBLEM RESOLVED
In my case the problem was that I was making the ajax call (with the $.ajax, $.get or $.getJSON methods from jQuery) with the full path in the url param:
url: "http://mydomain.com/site/cgi-bin/serverApp.php"
But the correct way is to pass the value of url as:
url: "site/cgi-bin/serverApp.php"
Some browsers make no distinction between one form and the other, but Firefox 3.6 for Mac OS treats this full path as "cross site scripting"... Another thing: in the same browser there is a distinction between:
http://mydomain.com/site/index.html
and:
http://www.mydomain.com/site/index.html
That is in fact the correct point of view, but most implementations make no distinction, so the solution was to remove all the text specifying the full path to the script in the methods that make the ajax request AND... remove any BASE tag from the index.html file:
<base href="http://mydomain.com/"> <--- bad idea, remove it!
If you don't remove it, this browser version on this system may treat your ajax request as if it were a cross-site request!
I had the same problem, but only on the Mac OS machine. The problem is that Firefox treats the ajax response as a "cross site" call; on any other machine/browser it works fine. I didn't find any help about this (I think it is a Firefox implementation issue), but I am going to try the following code on the server side:
header('Content-type: application/json');
to ensure that the browser gets the data as "json data"...
The browser is preventing you from cross-site scripting.
If the url is outside of your domain, then you need to do this on the server side or move it into your domain.
This might not be the best way to do it, but it somehow worked for me, so I'm going to run with it.
In my PHP function that returns the data, one line before the return statement, I added an echo statement echoing the data I want to send.
Not sure why it worked, but it did.
I had a similar problem to yours. What we had to do was use the document.domain solution found here:
Ways to circumvent the same-origin policy
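For reference, the document.domain approach looks roughly like this; it only relaxes the policy between frames that both set the same value (e.g. two subdomains of mydomain.com), not for XHR calls to an arbitrary host:

// Run this on BOTH documents, e.g. on app.mydomain.com and on the api.mydomain.com page loaded in an iframe
document.domain = "mydomain.com";

// Afterwards the parent page may script the iframe's document directly
var frame = document.getElementById("apiFrame"); // "apiFrame" is a hypothetical iframe id
console.log(frame.contentWindow.document.body.textContent);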
We also needed to change things on the web service side. We used the "Access-Control-Allow-Origin" header found here:
https://developer.mozilla.org/En/HTTP_access_control