I have enabled CORS in my application. In the code I specified that only a particular URL (domain) may access my API. Below is the code:
public static void Register(HttpConfiguration config)
{
    var cors = new EnableCorsAttribute("www.Test.com", "*", "*");
    config.EnableCors(cors);
}
As per the CORS policy, if I call the above API from the domain www.Test.com, the response is shown in my browser (JavaScript client), whereas if I call it from another domain (say www.sample.com), the response is not shown in the browser. This works fine in Chrome and Microsoft Edge.
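For illustration, a client-side call from a page on www.sample.com might look like the following (a hypothetical sketch; the API endpoint URL is made up):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://api.test.com/api/values'); // hypothetical API endpoint
xhr.onload = function() {
    console.log(xhr.responseText); // runs only if CORS allows this origin
};
xhr.onerror = function() {
    // In a CORS-compliant browser (Chrome, Edge) this fires for www.sample.com,
    // because Access-Control-Allow-Origin only names www.Test.com.
    console.error('Request blocked by CORS');
};
xhr.send();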
However, when I run it in IE, this does not work: even though I call the API from a different domain (say www.sample.com), the browser still renders the response. Is there an issue with IE?
This is because Internet Explorer 9 and below ignore the Access-Control-Allow headers and by default prohibit cross-origin requests for the Internet Zone.
As stated on webdavsystem.com:
Internet Explorer 9 and earlier ignores Access-Control-Allow headers
and by default prohibits cross-origin requests for Internet Zone. To
enable cross-origin access go to Tools->Internet Options->Security
tab, click on “Custom Level” button. Find the Miscellaneous -> Access
data sources across domains setting and select “Enable” option.
Yes, IE is a pain. IE 11 and Edge are trying to make things a little easier, but when dealing with IE 9 and below it's still going to be as hard as it gets.
Update: As of Sep 7 2019 (Firefox 79), this is no longer the case. Must have been a mistake in FF 51.
This question is for the Firefox devs/pros out there. I can't find any info on this.
I've long assumed that XMLHttpRequest (XHR) and similar APIs such as the new Fetch API are not supposed to work locally (i.e. on a file URI) and only work on the http or https URI schemes. It's supposed to be a big security risk.
In the past, the only way to circumvent this in Firefox was changing security.fileuri.strict_origin_policy to false in about:config. To my surprise, I can use both XHR and Fetch API without changing that setting in the latest Firefox.
Why does it start working out of nowhere on Firefox 51.0.1? Is this a bug, a new standard, or some vendor-specific thing? Is Chrome going to follow along with this?
To see what I mean, create an index.htm with some JS code and a test.txt with some text, and open the index.htm in Firefox locally.
Put this in the HTML:
<script>
fetch("test.txt").then(function(response) {
    return response.text();
}).then(function(text) {
    alert(text);
});
</script>
In Firefox it should show an alert box with the contents of the text file. In Chrome/Canary, it will complain:
Fetch API cannot load file:///R:/test/test.txt. URL scheme must be "http" or "https" for CORS request.
We are using the Keycloak 1.3.1 authentication library, and I've noticed that once I initialize Keycloak with { onLoad: 'login-required' }, IE (11) gets into an infinite loop...
Other browsers work fine.
I'm basically doing this:
keycloak.init({ onLoad: 'login-required' }).success(function(authenticated) {
    console.info(authenticated ? 'authenticated' : 'not authenticated');
    // some other stuff...
}).error(function() {
    console.warn('failed to initialize');
});
Any idea what's causing it, and how to solve it? I'm trying to install the newest version, 1.4.0, in the hope that this weird bug gets fixed.
Thanks in advance.
I had the same problem with Keycloak v1.5.0.Final / Internet Explorer 11, and finally figured out what is going on.
1. Behind the scenes
When using the modes 'login-required' or 'check-sso' in Keycloak's init method, the Keycloak JavaScript adapter sets up an iframe that checks at timed intervals whether the user is authenticated.
This iframe is retrieved from Keycloak's server (let's say http(s)://yourkeycloakhost:port):
http(s)://yourkeycloakhost:port/auth/realms/yourrealm/protocol/openid-connect/login-status-iframe.html?client_id=yourclientid&origin=http(s)://yourorigin
and its content is a JavaScript script which should be able to access the KEYCLOAK_SESSION cookie previously set by Keycloak on authentication (on the same domain, i.e. http(s)://yourkeycloakhost:port).
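To make the mechanism concrete, here is a highly simplified, hypothetical sketch of the idea. It is not the adapter's actual code; the message format and variable names are invented for illustration:
// Hypothetical, simplified illustration of the session-check iframe; not Keycloak's real code.
var keycloakUrl = 'https://yourkeycloakhost:port';
var iframe = document.createElement('iframe');
iframe.src = keycloakUrl + '/auth/realms/yourrealm/protocol/openid-connect/login-status-iframe.html'
           + '?client_id=yourclientid&origin=' + encodeURIComponent(location.origin);
iframe.style.display = 'none';
document.body.appendChild(iframe);

// Poll the iframe at a fixed interval; the iframe answers by reading the
// KEYCLOAK_SESSION cookie on the Keycloak domain.
setInterval(function() {
    iframe.contentWindow.postMessage('yourclientid sessionState', keycloakUrl);
}, 5000);

window.addEventListener('message', function(event) {
    if (event.origin !== keycloakUrl) {
        return;
    }
    if (event.data === 'changed') {
        // The iframe could not confirm the session (for example because it
        // cannot read the cookie, as happens in IE without a P3P header),
        // so the adapter restarts the login flow: hence the infinite loop.
    }
});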
2. The problem with IE
Yes! Here is the problem with Internet Explorer, which has a strict policy regarding iframes and cookies. The Keycloak iframe does NOT have access to the yourkeycloakhost domain's cookies because of IE's P3P policy (Microsoft Internet Explorer is the only major browser to support P3P).
This problem is well described in this Stack Overflow question.
3. Resolution
The solution is to make Internet Explorer trust our Keycloak domain (yourkeycloakhost) for cookies, so that the iframe is able to read the KEYCLOAK_SESSION cookie value and register it in its data.
To do that, your Keycloak server must append an HTTP response header with P3P information. You can do that with an Apache or nginx proxy that always sets the proper headers. I did that with Apache and its mod_headers module:
Header always set P3P "CP=ALL DSP COR CUR ADM PSA CONi OUR SAM OTR UNR LEG"
You can learn more about P3P from the W3C and/or validate your P3P policy with this P3P validator.
4. Consequence
You can have a look at Keycloak's iframe code:
var cookie = getCookie('KEYCLOAK_SESSION');
if (cookie) {
    data.loggedIn = true;
    data.session = cookie;
}
Now the cookie on the yourkeycloakhost domain is retrieved correctly by Internet Explorer, and the problem is fixed!
A workaround that worked for me, learnt from the Keycloak documentation: add the parameter checkLoginIframe when calling the init method: .init({onLoad: 'login-required', checkLoginIframe: false})
The Keycloak developers fixed this problem, as described by François Maturel, in version 1.9.3. For more information, see issue #2828.
I have a web page that serves as a configuration editor, which means that it will be accessed by opening the .html file directly rather than over HTTP.
This page needs to access another file (the configuration file to be edited), located in the same directory. The file is accessed using a relative path, General.json.
var getJSONFileContent = function( url ) {
    return $.ajax({
        type: "GET",
        url: url,
        async: false
    }).responseText;
};
var currentConfigAsJson = getJSONFileContent( "General.json" );
It works perfectly on Firefox, without changing any settings, but it fails on both IE and Chrome.
Chrome error:
file:///C:/Users/XXX/Desktop/XXX/General.json. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.
m.ajaxTransport.send @ jquery-1.11.3.min.js:5
m.extend.ajax @ jquery-1.11.3.min.js:5
getJSONFileContent @ General.html:68
(anonymous function) @ General.html:75
m.Callbacks.j @ jquery-1.11.3.min.js:2
m.Callbacks.k.fireWith @ jquery-1.11.3.min.js:2
m.extend.ready @ jquery-1.11.3.min.js:2
J @ jquery-1.11.3.min.js:2
Internet Explorer error:
SCRIPT5: Access denied.
File: jsoneditor.min.js, line: 7, column: 8725
I read that this is forbidden in Chrome (and probably IE and others) for security reasons, and that I have to start Chrome with special arguments to bypass it.
But why does it work on Firefox? Is there a way to make it work on Chrome without passing special arguments when running Chrome?
Are there Chrome-specific features that would allow me to read/write files without having to change settings or pass arguments? An end user wouldn't want to bother with that.
To solve the origin issue, set up a web server and host your page via localhost.
If you are releasing an HTML-based app, you would probably include a web server in your app.
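For example, a minimal static server sketch in Node.js (hypothetical; the file names General.html and General.json are taken from the question, and error handling is deliberately minimal, not production-safe):
// Minimal static file server sketch; serves the editor page and General.json
// from the directory the script lives in. Not hardened against path traversal.
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function(req, res) {
    var filePath = path.join(__dirname, req.url === '/' ? 'General.html' : req.url);
    fs.readFile(filePath, function(err, data) {
        if (err) {
            res.writeHead(404);
            res.end('Not found');
            return;
        }
        res.writeHead(200);
        res.end(data);
    });
}).listen(8080, function() {
    console.log('Editor available at http://localhost:8080/');
});
Opening http://localhost:8080/ then lets the $.ajax call for General.json succeed, because the page and the configuration file now share the same http origin.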
Another approach is to try NW.js (formerly node-webkit), which bundles a Chromium with elevated privileges that lets you do the job.
It's rather opinion-based to speculate about why one browser allows this and another doesn't. But Chrome and IE are products belonging to companies, while Firefox is backed by the Mozilla Foundation. So it makes sense that the commercial companies act more cautiously on security issues in their own interest, while the Mozilla Foundation is willing to be more experimental with new techniques, given that Brendan Eich (the creator of JavaScript) is a big figure at Mozilla.
“Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.”
https://www.rfc-editor.org/rfc/rfc2616#section-15.1.3
According to the standard, https://google.com shouldn't send the Referer header to non-secure sites, but it does. Do other HTTPS sites send the Referer header to HTTP sites?
All these tests are done using Chrome v33.0.1750.117
To run the test, I go to the first page, then open the console and manually do a redirect using location = "http://reddit.com":
https://google.com -> http://www.reddit.com : Referer header is kept
https://startpage.com/ -> http://www.reddit.com : Referer header is stripped
https://bankofamerica.com -> http://reddit.com : Referer header is stripped
https://facebook.com -> http://reddit.com : Referer header is stripped
Is Google doing something special to keep the Referer header? Is there a list of HTTPS sites that keep the Referer header? Are there any other cases where the Referer header is removed?
When you do a Google Search with Google Chrome, the following tag appears in the search results:
<meta content="origin" id="mref" name="referrer">
The origin value means that instead of completely omitting the Referer when going to http from https, the origin domain name should be provided, but not the exact page within the site (e.g. search strings will remain private).
On the other hand, link aggregators like Lobsters have the following, which ensures that the whole URL will always be provided in the Referer (by browsers like Chrome and Safari), since link stories are public anyway:
<meta name="referrer" content="always" />
As of mid-2014, this meta[name="referrer"] is just proposed functionality for HTML5, and it doesn't appear to have been implemented in Gecko, for example -- only Chrome and Safari are claimed to support it.
http://smerity.com/articles/2013/where_did_all_the_http_referrers_go.html
https://bugzilla.mozilla.org/show_bug.cgi?id=704320
http://wiki.whatwg.org/wiki/Meta_referrer
cnst answers this correctly above; it's content="origin". That forces browsers going HTTPS->HTTPS and HTTPS->HTTP to send the request header:
http-referer=https://www.google.com
This functionality allows sites to get credit for traffic without leaking URL parameters to a third party. It's awesome, as it's so much less hacky than what people have used here in the past.
There are currently three competing specs for this. I don't know which one is authoritative, and suspect it's a mix. They're similar on most points.
http://www.w3.org/TR/referrer-policy/
http://w3c.github.io/webappsec/specs/referrer-policy/
https://wiki.whatwg.org/wiki/Meta_referrer
Here's the available support that I know of; I'd love for people to let me know if I'm wrong or am missing anything.
Now:
Chrome 17+ supports this on desktop
Chrome 25+ for mobile devices
Safari 6 on iPad and iPhone
Unknown version:
Desktop Safari 7 supports this; possible support in earlier versions, but I don't have a browser to confirm.
Upcoming real soon now:
IE12 Beta has working support (new this week).
Firefox 38 has the code checked in for a May 2015 release. https://bugzilla.mozilla.org/show_bug.cgi?id=704320
I think it's because Google uses
<meta name="referrer" content="always">
So when a person goes from HTTPS to an HTTP site, the referrer is kept. Without this, the referrer would be stripped.
I make an Ajax request in which I set the response cacheability and last modified headers:
if (!String.IsNullOrEmpty(HttpContext.Current.Request.Headers["If-Modified-Since"]))
{
    HttpContext.Current.Response.StatusCode = 304;
    HttpContext.Current.Response.StatusDescription = "Not Modified";
    return null;
}
HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.Public);
HttpContext.Current.Response.Cache.SetLastModified(DateTime.UtcNow);
This works as expected. The first time I make the Ajax request, I get 200 OK. The second time I get 304 Not Modified.
When I hard refresh in Chrome (Ctrl+F5), I get 200 OK - fantastic!
When I hard refresh in Internet Explorer/Firefox, I get 304 Not Modified. However, every other resource (JS/CSS/HTML/PNG) returns 200 OK.
The reason is that the "If-Modified-Since" header is sent for XMLHttpRequests regardless of a hard refresh in those browsers. I believe Steve Souders documents it here.
I have tried setting an ETag and conditioning on "If-None-Match" to no avail (it was mentioned in the comments on Steve Souders' page).
Has anyone got any gems of wisdom here?
Thanks,
Ben
Update
I could check the "If-Modified-Since" against a stored last modified date. However, hopefully this question will help other SO users who find the header to be set incorrectly.
Update 2
Whilst the request is sent with the "If-Modified-Since" header each time, Internet Explorer won't even make the request if an expiry isn't set or is set to a future date. Useless!
Update 3
This might as well be a live blog now. Internet Explorer doesn't bother making the second request when the host is localhost. Using a real IP or the loopback address works.
Prior to IE10, IE does not apply the Refresh Flags (see http://blogs.msdn.com/b/ieinternals/archive/2010/07/08/technical-information-about-conditional-http-requests-and-the-refresh-button.aspx) to requests that are not made as part of loading the document.
If you want, you can adjust the target URL to contain a nonce to prevent the cached copy from satisfying a future request. Alternatively, you can send max-age=0 to force IE to conditionally revalidate the resource before each reuse.
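A sketch of the nonce approach on the client side (the endpoint URL is made up for illustration; the nonce is just the current timestamp):
// Append a unique query-string value so a cached entry can never satisfy the request.
var url = '/api/data?_=' + new Date().getTime(); // hypothetical endpoint plus nonce
var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.onload = function() {
    console.log(xhr.status, xhr.responseText);
};
xhr.send();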
As for why the browser reuses a cached resource that didn't specify a lifetime, please see http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx
The solution I came upon for consistent control was managing the cache headers for all request types.
So, I forced standard requests to behave the same as XMLHttpRequests, which meant telling IE to use the following cache policy: Cache-Control: private, max-age=0.
For some reason, IE was not honoring headers for various request types. For example, my cache policy for standard requests defaulted to the browser's, and for XMLHttpRequests it was set to the aforementioned policy. However, making a request to something like /url as a standard GET request would render the result properly. Unfortunately, making the same request to /url as an XMLHttpRequest would not even hit the server, because the GET request was cached and the XMLHttpRequest was hitting the same URL.
So, either force your cache policy on all fronts or make sure you're using different access points (URIs) for your request types. My solution was the former.