What's the point of the Anti-Cross-Domain policy? - javascript

Why did the creators of the HTML DOM and/or Javascript decide to disallow cross-domain requests?
I can see some very small security benefits of disallowing it, but in the long run it seems to be an attempt at making JavaScript injection attacks less powerful. That is all moot anyway with JSONP; it just means the JavaScript code is a tiny bit more difficult to write, and you have to have server-side cooperation (though it could be your own server).

The actual cross-domain issue is huge. Suppose SuperBank.com internally sends a request to http://www.superbank.com/transfer?amount=10000&to=123456 to transfer $10,000 to account number 123456. If I can get you to my website while you are logged in at SuperBank, all I have to do is have your browser send an AJAX request to SuperBank.com to move thousands of dollars from your account to mine.
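In fact the attack doesn't even need AJAX; a hypothetical attacker page can fire that request with a plain image tag, because the browser attaches the victim's SuperBank session cookies automatically (the URL is the one from the example, with the amount written as 10000 to match the $10,000 figure):

```html
<!-- Hypothetical attacker page. No script required: the victim's browser
     sends their SuperBank session cookie along with this request. -->
<img src="http://www.superbank.com/transfer?amount=10000&to=123456"
     style="display:none" alt="">
```

This is classic CSRF, which is why banks must defend with tokens rather than rely on the same-origin policy alone.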
The reason JSON-P is acceptable is that it is pretty darn hard to abuse. A website using JSON-P is pretty much declaring the data to be public information, since that format is too inconvenient to be used otherwise. But if it's unclear whether or not data is public information, the browser must assume that it is not.

When cross-domain scripting is allowed (or hacked by a clever Javascripter), a webpage can access data from another webpage. Example: joeblow.com could access your Gmail while you have mail.google.com open. joeblow.com could read your email, spam your contacts, spoof mail from you, delete your mail, or any number of bad things.

To clarify some of the ideas in the question with a specific use case:
The cross-domain policy is generally not there to protect you from yourself. It's there to protect the users of your website from the other users of your website (XSS).
Imagine you had a website that allowed people to enter any text they want, including JavaScript. Some malicious user decides to add some JavaScript to the "about yourself" field. Users of your website would visit his profile and have this script executed in their browser. This script, since it's being executed on your website's behalf, has access to cookies and such from your website.
If the browser allowed cross-domain communication, this script could theoretically collect your info and then upload it to a server that the malicious user owns.

Here's a distinction for you: cross-domain AJAX allows a malicious site to make your browser do things on its behalf, while JSON-P allows a malicious server to tamper with a single domain's pages (and to make the browser do things to that domain on your behalf) but (crucial bit) only if the page served went out of its way to load the malicious payload.
So yes, JSON-P has some security implications, but they are strictly opt-in on the part of the website using them. Allowing general cross-domain AJAX opens up a much larger can of worms.
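To make that opt-in concrete, here is a minimal sketch of how JSON-P works. The callback name (handleData) and the response body are made up; in a real page the remote script arrives via a <script> tag, which we simulate here with eval:

```javascript
// JSON-P sketch: the page defines a global callback, then loads a script
// from another origin; the server wraps its JSON in a call to that callback.
let received = null;
function handleData(payload) { received = payload; }

// In a browser this would be:
//   <script src="https://api.example.com/data?callback=handleData"></script>
// The server's response body is executable JavaScript:
const serverResponse = 'handleData({"user":"alice","balance":100})';

// Loading the <script> simply executes that body; simulated here:
eval(serverResponse);

console.log(received.user); // "alice"
```

The key point: the site had to define handleData and deliberately include the remote script, which is why this counts as strictly opt-in.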


Prevent local PHP/HTML files preview from executing javascript on server

I have some HTML/PHP pages that include JavaScript calls.
Those calls point to JS/PHP methods in a library (PIWIK) stored on a remote server.
They are triggered using an http://www.domainname.com/ prefix to point to the correct files.
I cannot modify the source code of the library.
When my own HTML/PHP pages are previewed locally in a browser (I mean using a c:\xxxx kind of path, not a localhost://xxxx one), the remote scripts are called and do their processing.
I don't want this to happen; those scripts should only execute if they are called from a www.domainname.com page.
Can you help me secure this?
One could for sure directly bypass this protection by modifying the web pages on the fly with some browser add-on while browsing the real website, but that is a little bit harder to achieve.
I've opened an issue on the PIWIK issue tracker, but I would like to secure and protect my website and the corresponding statistics from this issue as soon as possible, while waiting for a future Piwik update.
EDIT
The process I'd like to put in place would be:
Someone opens a page from anywhere other than www.domainname.com
> this page calls a JS method on a remote server (or one copied locally),
> this script calls a PHP script on the remote server
> the PHP script says "hey, where the heck are you calling me from? Go to hell!". Or the PHP script simply does not execute.
I've tried to play with .htaccess for that, but as any JS script must be on the client, it also blocks the legitimate calls from www.domainname.com.
Untested, but I think you can use php_sapi_name() or the PHP_SAPI constant to detect the interface PHP is using, and do logic accordingly.
Not wanting to sound cheeky, but your situation sounds rather scary and I would advise searching for some PHP configuration best practices regarding security ;)
Edit after the question has been amended twice:
Now the problem is more clear. But you will struggle to secure this if the JavaScript and PHP are not on the same server.
If they are not on the same server, you will be reliant on HTTP headers (like the Referer or Origin header) which are fakeable.
But PIWIK already tracks the referrer ("Piwik uses first-party cookies to keep track of some information (number of visits, original referrer, and unique visitor ID)"), so you can discount hits from invalid referrers.
If that is not enough, the standard way of being sure that a request to a web service comes from a verified source is to use a standard Cross-Site Request Forgery prevention technique: a CSRF "token", sometimes also called a "crumb" or "nonce". As this is analytics software, I would be surprised if PIWIK does not do this already, if it is possible with their architecture. I would ask them.
Most web frameworks these days have CSRF token generators and APIs you should be able to make use of. It's not hard to make your own, but if you cannot amend the JS you will have problems passing the token around. Again, the PIWIK JS API may have methods for passing session IDs and similar data around.
Original answer
This can be accomplished with a Content Security Policy to restrict the domains that scripts can be called from:
CSP defines the Content-Security-Policy HTTP header that allows you to create a whitelist of sources of trusted content, and instructs the browser to only execute or render resources from those sources.
Therefore, you can set the script policy to 'self' to only allow scripts from your current domain (the file system) to be executed. Any remote ones will not be allowed.
Normally this would only be available from a source where you can set HTTP headers, but as you are running from the local file system this is not possible. However, you may be able to get around this with the http-equiv <meta> tag:
Authors who are unable to support signaling via HTTP headers can use tags with http-equiv="X-Content-Security-Policy" to define their policies. HTTP header-based policy will take precedence over tag-based policy if both are present.
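For reference, a minimal meta-tag policy might look like this (note: the quote above refers to the older X-prefixed draft header; modern browsers use the unprefixed name):

```html
<meta http-equiv="Content-Security-Policy" content="script-src 'self'">
```

With this in place, script elements pointing at other origins are refused by the browser.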
Answer after question edit
Look into the Referer or Origin HTTP headers. Referer is available for most requests; however, it is not sent when the browser navigates from an HTTPS page to an HTTP one, and a proxy or privacy plugin installed by the user may strip it.
Origin is sent with cross-domain XHR requests (and, in some browsers, with same-domain ones too).
You will be able to check that these headers contain your domain where you will want the scripts to be called from. See here for how to do this with htaccess.
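An untested sketch of the htaccess approach (domainname.com is the domain from the question; adapt the pattern to your setup):

```apache
RewriteEngine On
# Refuse script requests whose Referer is not www.domainname.com.
# Remember: the Referer header can be faked or stripped by clients.
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?domainname\.com/ [NC]
RewriteRule \.(js|php)$ - [F]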
At the end of the day this doesn't make it secure, but, in your own words, it will make it "a little bit harder to achieve".

Scraping a remote URL for bookmarking service without getting blocked

I'm using a server-side Node.js function to get the text of a URL passed by the browser, to auto-index that URL in a bookmarking service. I use jsdom for server-side rendering. But I get blocked by popular sites, despite the requests originating from legitimate users.
Is there a way to implement the URL text extraction on the browser side, such that requests would always seem to be coming from a normal distribution of users? How do I get around the cross-site security limitations in the browser? I only need the final DOM-rendered text.
Is a bookmarklet the best solution? When the user wants to bookmark the page, I just append a form in a bookmarklet and submit the DOM-rendered text in my bookmarklet?
I know SO hates debates, but any guidance on good methods would be much appreciated.
You could certainly do it client-side, but I think that would be overly complex. The client would have to send the HTML to your service, which would require very careful sanitising, and it might be difficult to control the volume of incoming data.
I would probably simply track the request domains and ensure that I limited the frequency of calls to any single domain. That should be fairly straightforward if using something like Node.js, where you could easily set up any number of background fetch tasks. This would also allow you to fine-tune the bandwidth used.
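A sketch of that per-domain throttle (minIntervalMs is an assumed tuning knob, not from the question):

```javascript
// Per-domain rate limiting: remember when each domain was last fetched and
// refuse a new fetch until a minimum interval has passed.
const lastFetch = new Map();
const minIntervalMs = 5000; // assumed value; tune to taste

function mayFetch(url, now = Date.now()) {
  const domain = new URL(url).hostname;
  const last = lastFetch.get(domain);
  if (last !== undefined && now - last < minIntervalMs) {
    return false; // too soon: queue or drop this request
  }
  lastFetch.set(domain, now);
  return true;
}

console.log(mayFetch('https://example.com/a', 0));    // true
console.log(mayFetch('https://example.com/b', 1000)); // false: same domain, too soon
console.log(mayFetch('https://other.org/x', 1000));   // true: different domain
```

Requests that return false would go back into a queue and be retried by the background fetch tasks.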

Why will disabling Browser Web Security (e.g. Chrome) help doing Cross-Site-Requests?

We have several internal web applications. One of those needs to access all the other applications. Problem is: Same-Orign-Policy.
Actually I did manage to get around it. First of all, IE is quite sloppy when it comes to web security, so it actually asked me whether I wanted these requests to be made or not. If I clicked yes, it just executed the cross-site requests.
But since most of the users won't use IE, there was the need to make it run in another browser.
So, I tried to make it run in Google Chrome. And after some research I found out that it will work when I turn off web security by using the command-line switch --disable-web-security.
This did the job. But unfortunately, most of the users won't be using this execution parameter. Therefore I need another solution.
Then I came across CORS. CORS seems to be implemented in Chrome, but it has one drawback (for me). I need to set headers on the server side.
For reasons I won't discuss in here, this is a no go.
So what I was actually wondering about is:
Why will disabling Browser's Web Security do the job, while I need the server to allow the request when using CORS?
What exactly happens inside the browser when I disable the web security?
And is there another way to execute my CSR without adding headers on the server's side or disabling the security?
Thanks in advance
EDIT: JSONP is out of question either
Why will disabling Browser's Web Security do the job, while I need the server to allow the request when using CORS?
The point of the Same Origin Policy is to prevent Mallory's evil site from making Alice's browser go to Bob's site and expose Alice's data to Mallory.
Disabling security in the browser is, effectively, saying "I don't care about protecting my data on Bob's (or any other!) site". This is a dangerous thing to do if the browser is ever going to go near the open web. The option is provided to make development more convenient — I prefer a more controlled solution (such as the URL rewriting options in Charles proxy).
CORS is Bob's site saying "This URL doesn't contain any data that Mallory (or some other specific site, or everyone) shouldn't have access to, so they can access it." Bob's site can do this because it knows which parts of it contain public data and which parts contain private data.
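In header terms, that opt-in is just a response header Bob's server chooses to add per resource (a sketch; the origin names are placeholders):

```javascript
// Bob's server decides, per resource, whether to emit the CORS header.
// The allowed origin below is an illustrative placeholder.
function corsHeaders(requestOrigin) {
  const allowedOrigins = ['https://alice-app.example'];
  if (allowedOrigins.includes(requestOrigin)) {
    return { 'Access-Control-Allow-Origin': requestOrigin };
  }
  return {}; // no header: the browser refuses to expose the response cross-origin
}

console.log(corsHeaders('https://alice-app.example'));
console.log(corsHeaders('https://mallory.example')); // {}
```

Without the header, the browser performs the request but withholds the response from the calling page, which is exactly the protection being disabled by --disable-web-security.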
What exactly happens inside the browser when I disable the web security?
It disables the aforementioned security feature and reduces the protection of the user's data.
And is there another way to execute my CSR without adding headers on the server's side or disabling the security?
A proxy. See Ways to circumvent the same-origin policy, specifically the Reverse Proxy method.
I guess you are using AJAX requests, here is another question Ways to circumvent the same-origin policy that has a big detailed answer.
You can use a Flash object (Flash is not bound by the browser's same-origin policy, though the target server must allow access via its crossdomain.xml policy file).
Also, about "what's the worst that could happen", see http://blogs.msdn.com/b/ieinternals/archive/2009/08/28/explaining-same-origin-policy-part-1-deny-read.aspx and http://en.wikipedia.org/wiki/Cross-site_scripting

How are bookmarklets (JavaScript in a link) verified by servers? How is security kept?

I've been trying to access different domains from my JavaScript (to pull the page title) but cannot because of the same-origin policy.
What I realized is that JavaScript "installed" into the browser via bookmarklets is not restrained by this policy.
This got me wondering how security is kept... for example, Delicious bookmarklets: I could just modify them and start AJAXing delicious.com. I don't plan on doing this, but likewise someone could do this to a bookmarklet that I create.
How do you create security here?
Do some sites allow public access via ajax?
As far as the server is concerned, there is no such thing as AJAX. AJAX requests are just HTTP requests like any other.
The restriction of cross domain AJAX is done by the browser for the sake of avoiding cross site scripting attacks (you wouldn't want a third party ad to have access to your Stack Overflow session data and be able to ship that somewhere else, would you?).
The browser (apparently) does not limit "bookmarklets" in the same way. If you decided to put a bit of script into a bookmark, I guess the browser is perfectly happy to execute it.

What are the security ramifications of allowing a user insert their own JavaScript into our web application?

I have a requirement to allow a user to paste a JavaScript block into our servers and have it added to that user's webpage under <head> (the user's domain would be customName.oursite.com). What are the security ramifications of this type of feature?
You're letting them do anything that they can do with JavaScript, which includes attacks like XSS and CSRF. They can potentially steal sessions, redirect to malicious sites, steal cookie information, or even lock up users' browsers. Essentially there are many security ramifications, and you would also be increasing your liability: users may associate the bad behavior with your site and may not realize that it is your customer who has included the malicious JavaScript.
OK, I'll give examples since everyone else has explained. I have first-hand experience with this as well, just for fun and practice actually:
One can create a script that makes the page look like your site's login page and record the logins. People will unknowingly login using that fraud page (with a bit of social engineering) and presto! Usernames and passwords.
One can manipulate an account without the user knowing it. Let's say I have planted malicious code on my profile page that loads an iframe with http://example.com/account.php. If the user visits the page with the script, the iframe will load the accounts page. Since the user is logged in, his information will be shown. The script can then scrape the iframe (since it's the same domain) and forward it to the hacker's server via a GET request (serialized URL).
One can plant cookies and track you. When an unsuspecting user visits a malicious page, the script can read all the cookies, forward it to the hacker's server. Since cookies are used on pretty much everything (from saving sessions, to storing shopping cart information), you've been exposed.
One can know what sites you have been on using the "visited links are purple" technique. The script can generate a list of links on the DOM, check which of them are purple, and then forward that "purple list" to the hackers.
Plant a spamming script that uses the user's credentials. Say malicious code scans the rest of the pages on your site for Facebook Like buttons, then clicks them or initiates the GET/POST for the Like without the user knowing. In the end, the user won't know they've liked a million pages.
and so much more.. care to suggest a few possible ones?
In short, letting users post arbitrary scripts, especially in a shared space, is... unspeakable evil.
If you are running a web hosting service, each user should have their own "space", separate from the other users and separate from the main site. That's the reason why in free hosting a user gets their own subdomain, and in paid hosting their own domain. Users can do whatever they want in their space without compromising* the host or the other users under it.
* There are some exceptions. Since the user has the freedom to do anything to their space, they can turn their space into a fully malicious site. And still, other users should exercise basic site security. Even if they are in another space, there are still a lot of ways to gain entry or do something sinister.
Cross site scripting attack is a big one
XSS is the biggest issue. This could allow access to cookies, pages, give elevated privileges or redirect to malicious pages that could result in viruses across the user's machine or even a server(s).
Think of this from two perspectives. First, users can pretty much already do this via bookmarklets and the browser developer tools' JavaScript console, so your servers should ALREADY be hardened against malicious clients. Remember: while users may access your site with a browser, attackers can attack with arbitrary HTTP or TCP messages, and you must handle that. So from the beginning, you should be designing your server to treat the client as untrusted at best and hostile at worst.
Now specifically, it's pretty rare for sites to allow this sort of thing because as you correctly imagine, it does make things much more tricky. However, many applications need this type of solution. So just keep in mind that data you send out from your server may be accessed by untrusted, broken, leaky (in terms of privacy), or otherwise insecure user-authored javascript. Don't trust that your javascript can do any authorization filtering in the browser. All data leaving the server must already be properly filtered for authorization, and never assume your server will receive valid/authorized requests.
I think the key question is where does the JavaScript execute? If it runs on one page that is only visible to the user, then only that user could be affected by the script. If it's run on a comment board, for example, then other users could be affected by the script. The script might, for example, make an http request to a malicious server, passing along the cookie with the request.
