Prevent XSS attacks site-wide - javascript

I'm new to ColdFusion, so I'm not sure if there's an easy way to do this. I've been assigned to fix XSS vulnerabilities site-wide on this CF site. Unfortunately, there are tons of pages that are taking user input, and it would be near impossible to go in and modify them all.
Is there a way (in CF or JS) to easily prevent XSS attacks across the entire site?

I hate to break it to you, but -
XSS is an Output problem, not an Input problem. Filtering/validating input is an additional layer of defence, but it can never protect you completely from XSS. Take a look at the XSS cheat sheet by RSnake - there are just too many ways to get past a filter.
There is no easy way to fix a legacy application. You have to properly encode anything that you put in your html or javascript files, and that does mean revisiting every piece of code that generates html.
See OWASP's XSS prevention cheat sheet for information on how to prevent XSS.
Some comments below suggest that input validation is a better strategy than encoding/escaping at the time of output. I'll just quote from OWASP's XSS prevention cheat sheet -
Traditionally, input validation has been the preferred approach for handling untrusted data. However, input validation is not a great solution for injection attacks. First, input validation is typically done when the data is received, before the destination is known. That means that we don't know which characters might be significant in the target interpreter. Second, and possibly even more importantly, applications must allow potentially harmful characters in. For example, should poor Mr. O'Malley be prevented from registering in the database simply because SQL considers ' a special character?
To elaborate - when the user enters a string like O'Malley, you don't know whether you will need that string in JavaScript, in HTML or in some other language. If it's in JavaScript, you have to render it as O\x27Malley, and if it's in HTML, it should look like O&#39;Malley. Which is why it is recommended that the string be stored in your database exactly as the user entered it, and then escaped appropriately according to its final destination.
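To make the context-dependence concrete, here is a minimal JavaScript sketch (the helper names escapeHtml and escapeJsString are invented for this illustration, not part of any particular library):

// Hypothetical helpers: escape according to the output context.
function escapeHtml(s) {
    return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;")
            .replace(/"/g, "&quot;").replace(/'/g, "&#39;");
}
function escapeJsString(s) {
    // Hex-escape anything that could break out of a quoted JS string.
    return s.replace(/[\\'"<>&]/g, function (c) {
        return "\\x" + c.charCodeAt(0).toString(16);
    });
}
escapeHtml("O'Malley");      // "O&#39;Malley"  - safe inside HTML text or attributes
escapeJsString("O'Malley");  // "O\x27Malley"   - safe inside a quoted JavaScript string

The stored value stays O'Malley; only its representation changes at the last moment, according to the output context.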

One thing you should look at is implementing an application firewall like Portcullis: http://www.codfusion.com/blog/page.cfm/projects/portcullis which includes a much stronger system than the built-in scriptProtect, which is easily defeated.
This is a good starting point for preventing many attacks, but for XSS you are going to end up going in by hand and verifying that you are using things like HTMLEditFormat() on any output that can be touched by the client side or client data, to prevent outputting valid HTML/JS code.

The ColdFusion 9 Livedocs describe a setting called "scriptProtect" which allows you to use ColdFusion's built-in protection. I have not used it yet, so I'm not sure how effective it is.
However, if you implement a third-party or your own method of handling it, you would most likely want to put it in the "onRequestStart" event of the application to allow it to handle the entire site when it comes to URL and FORM scope violations (because every request would execute that code).

Besides applying all the ColdFusion hot fixes and patches you can also:
Not foolproof, but it helps: set the following under CFADMIN > Settings > "Enable Global Script Protection"
Add CSRFToken to your forms http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet
Check the HTTP Referer header
Add validation for all User inputs
Use cfqueryparam for your queries
Apply HTMLEditFormat() to any output
Besides Peter Freitag's excellent blog you should also subscribe to Jason Dean's http://www.12robots.com

Related

Shouldn't I follow the OWASP DOM based XSS recommendations no matter where the payload is injected?

I have heard/read in various contexts that DOM-based XSS is caused by untrusted client-side input and that developers need to follow the instructions in the OWASP "DOM based XSS Prevention Cheat Sheet" in order to mitigate it.
My question is: shouldn't this guide be used irrespective of where the malicious payload is injected (client side, which can be from DOM elements like the URL, or server side, which can be from parameters of the immediately preceding request), if you are inserting untrusted data into JavaScript execution contexts?
Let's set aside the debate over whether it would be called DOM-based XSS in the latter case, because I am more interested in knowing whether this guide should be applied irrespective of where the payload is coming from (server/client) if you are putting untrusted data into execution contexts.
As a general rule, you should never put unsanitized user input anywhere on your site. You shouldn't display it, you shouldn't use it to form any queries and you certainly shouldn't include it. Anywhere a user, or a 3rd party of sorts, is inputting any form of data at all, you should sanitize it, and ask yourself:
How could this be used maliciously?
SQL injection is one to look out for, and of course XSS is a big one too, as you mentioned. XSS is injecting code which is then output, and SQLi is using input to form a SQL query, which can be catastrophic. Of course, there are plenty of other attacks which are not injection-based and do not rely on user input, but sanitize absolutely everything to avoid problems down the road.
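As a small client-side sketch of that rule (the element ID is invented), it does not matter whether the untrusted string arrived via a DOM source such as location.hash or was reflected by the server; inserting it as text rather than markup keeps it inert:

// Untrusted data, whether from a DOM source (classic DOM-based XSS) or echoed by the server.
var untrusted = decodeURIComponent(location.hash.slice(1));

// Dangerous: parsed as HTML, so payloads like <img src=x onerror=...> execute.
// document.getElementById("greeting").innerHTML = untrusted;

// Safer: inserted as plain text, never parsed as markup.
document.getElementById("greeting").textContent = untrusted;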
Hope that answers your question; it was a bit difficult to understand exactly what you were asking.

When to clean trailing/leading whitespace from user input - On Input or Before DB Insertion?

I've got a question which would be applicable to an MVC-based platform, but I guess also applicable to any web-based platform which handles user form inputs.
What are the best practices, and ideal stage from which to remove trailing/leading whitespace from user input?
I see this could happen at a few stages:
Immediately Upon User Form Input - e.g. JavaScript functions to strip whitespace as the user types / pre-submission
Inside the Controller on Params Submission
Intermediate Model/Attribute Methods
Prior to or upon Database Persistence
What is best practice in this regard, and specifically what are the pros/cons of doing it at a certain stage, or at multiple stages?
I think it depends on the type of application:
For a standard web app, I would say you definitely want to clean data in the browser sometime before submission so that you can validate it (for example, an email address with a leading space would fail format validation or a length check). It is better to validate without sending data to the server when possible.
If you are writing an API, especially a public one, I would definitely clean the data server side or return an error. You can't trust clients to send you clean data. I would probably do it in the model before validation, which shouldn't be too hard to do automatically.
If bad data can cause a security issue (XSS or SQL injection), then you want to clean it on the server as well as the client. Even in a web app there is nothing stopping a malicious user from faking a request from a web browser. If spaces in the data won't break anything, then this may not be necessary (if someone 'maliciously' adds a leading space to their blog title, it might look weird, but it is only going to harm them).
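A rough JavaScript sketch of that split (the form and field names are illustrative): trim on the client so validation can give immediate feedback, then trim again on the server, since the client can always be bypassed.

// Client side: trim just before validation, not on every keystroke.
var form = document.querySelector("form");
form.addEventListener("submit", function (e) {
    var email = form.elements["email"].value.trim();
    if (!/^\S+@\S+\.\S+$/.test(email)) {
        e.preventDefault();   // fail fast without a server round trip
    }
});

// Server side (sketch): never trust the client's cleanup; repeat it before validating/persisting.
function normalizeEmail(raw) {
    return String(raw).trim();
}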
This is a very opinion-based question, I think. It depends on the person implementing it and also on the application.
If you don't have to clean immediately after user input, I would say avoid #1 since it will be confusing to your users while they are typing, and also it can have a performance impact on slower/smaller devices.
#2 and #3 will be very similar; a nice thing about #3 is that if you're using the same property in many places, your trimming logic will live in only one place. Both run on your server, which takes the performance hit away from the client device.
#4 depending on your DBMS can be very easy or difficult to implement.
I would personally choose #2 or #3, but again that's my opinion and someone else can have a completely different one than mine.
Also, you certainly don't need to do it at multiple stages if you get one stage right.

How to avoid cross-site scripting in JavaScript used in a Tcl script? [duplicate]

This question already has answers here: What are the common defenses against XSS? [closed] (4 answers, closed 8 years ago)
While working on Project Open, which is an open-source application, I found that the URL http://[host_ip]:8000/register/ includes JavaScript that is vulnerable to cross-site scripting and to authentication bypass using SQL injection.
I want to know how I can avoid this. Do I have to add a filter for that, and how should I do it?
Please let me know if the problem is not clear.
SQL Injection
The universal answer to SQL Injection problems is “never send any user input to the database as part of an SQL string”. Anything that can go as a parameter should do so. Thus, instead of (in some dialect that might not exactly match what you're looking at):
db eval "SELECT userid FROM users WHERE username = '$user' AND password = '$pass'"
you do:
db eval "SELECT userid FROM users WHERE username = ? AND password = ?" $user $pass
# I personally prefer to put SQL inside {braces}… but that's your call
The key is that because the database engine just understands that these are parameters, it never tries to interpret them as SQL. Injection Impossible. (Unless you're using badly-written stored procedures.)
It gets much more complex where you want to have a table or column name specified by a user. That's a case where you can't send it as a parameter; such SQL identifiers must be interpreted by the SQL engine. Your only alternatives there are to either remap from user-supplied terms to ones that you control, or to rigorously validate.
Remapping is done by having a separate trivial table that maps from user-supplied names to ones you've generated:
db eval {SELECT realname FROM namemap WHERE externalname = ?} $externalname
Because the generated name is easy to guarantee to be free of nasty characters and not to be one of SQL's keywords, it can be safely used in SQL text without further quoting. You can also try doing the mapping per request (factor the mapping code out into a procedure, of course) by stripping all bad characters from the name. A suitable regsub might be:
regsub -all {\W+} $externalname "" realname
but then we need additional checks to see that it isn't “evil”:
# You'll need to create an array, SQLidentifiers, first, perhaps like:
# array set SQLidentifiers {UPDATE - SELECT - REPLACE - DELETE - ALTER - INSERT -}
# But you can do that once, as a global "constant"
if {[regexp {^\d} $realname] || [info exists SQLidentifiers([string toupper $realname])]} {
    error "Bad identifier, $externalname"
}
As you can see, it's a good idea to factor out such transforms and checks into their own procedure so you get them right, once.
And you must test your code extensively. I cannot stress that strongly enough. Your tests must try really hard to break things, to attempt SQL injection via every possible field that anyone could pass into the software; not one of them should ever result in anything happening that your code does not expect.
It's probably a good idea to get someone else to write at least some of the tests; experience from the security community suggests that it is relatively easy to write code that you can't break yourself, but much harder to write code that someone else can't break. Also consider doing fuzz testing, sending computer-generated random data at the interface. In all cases, either things should give a graceful error or should succeed, but never ever cause the application to outright fail.
(You might well allow highly-authenticated users — system/database administrators — to outright specify SQL to evaluate so they can do things like setting the system up, but they're the minority case.)
Cross-site Scripting
This is actually conceptually quite similar: it's caused (principally) by someone putting something in your site that unexpectedly gets interpreted as HTML (or CSS, or Javascript) rather than as human-readable text (with SQL injection, it's something getting interpreted as SQL rather than as data). Because you can't do the equivalent of parameterised queries when going back to the client, you have to use careful quoting. You're strongly recommended to do the careful quoting by using a proper templating library that constructs a DOM tree (with data coming from users or from the database being only ever inserted as text nodes).
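For example, on the client side, the "data only ever as text nodes" rule looks like this (a sketch; the element and variable names are made up):

var container = document.getElementById("comments");      // assumed existing <ul>
var comment = '<img src=x onerror=alert(1)>';              // hostile user input

// Building markup by string concatenation invites XSS:
// container.innerHTML += "<li>" + comment + "</li>";      // interpreted as HTML

// Building a DOM tree and inserting the data only as a text node does not:
var li = document.createElement("li");
li.appendChild(document.createTextNode(comment));          // stays plain text
container.appendChild(li);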
If you want users to supply a marked up piece of text, consider either delivering it back as plain text before using Javascript to render it as, say, Markdown, or completely parsing the user-supplied text on the server to construct a model (e.g., DOM tree) of what should be delivered, before sending it back as HTML generated from that model.
You must not allow users to specify a location where you load a script or frame from. Even allowing them to specify links is worrying, but you probably have to permit that if you can't restrict things to straight plain text. (Consider adding a mechanism for listing all links that have been supplied by users. Consider marking all external links with rel=nofollow unless you can positively detect that they go to somewhere that you whitelist.)
Direct supply of HTML is a “highly-authenticated users only” operation.
(I told a lie above. You can do the equivalent of SQL parameterised queries. You write JS that the client executes to fetch the user data using an AJAX query, perhaps serialized as JSON, and then do DOM manipulations there to render it; in effect, you're moving the DOM construction from the server to the client, but you're still doing DOM construction as that's the core of how you get this right. You have to remember to never insert the things retrieved as straight HTML though. Clients must not trust the server too much.)
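A rough sketch of that client-side pattern (the endpoint and response shape are invented; fetch is used here for brevity):

// The server returns plain data, e.g. {"comments": ["first!", "<script>boom</script>"]}
fetch("/api/comments")
    .then(function (resp) { return resp.json(); })
    .then(function (data) {
        var list = document.getElementById("comments");
        data.comments.forEach(function (text) {
            var li = document.createElement("li");
            li.textContent = text;          // DOM construction on the client; never raw HTML
            list.appendChild(li);
        });
    });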
The comments I made above about testing apply here too. With testing for XSS, you're looking to inject something like <script>alert("boom!")</script>; any time you can get that in and cause a popup dialog - except by being a system administrator with direct permission to edit HTML directly - you've got a massive, dangerous hole to plug. (It's quite a good thing to try to inject, as it is very noticeable and yet fairly benign in itself.)
Don't try to just filter out <script> using regular expressions. It's far too hard to get that right.

Security comparison of eval and innerHTML for clientside javascript?

I've been doing some experimenting with innerHTML to try and figure out where I need to tighten up security on a webapp I'm working on, and I ran into an interesting injection method on the mozilla docs that I hadn't thought about.
var name = "<img src=x onerror=alert(1)>";
element.innerHTML = name; // Instantly runs code.
It made me wonder a.) if I should be using innerHTML at all, and b.) if it's not a concern, why I've been avoiding other code insertion methods, particularly eval.
Let's assume I'm running JavaScript client-side in the browser, that I'm taking the necessary precautions to avoid exposing any sensitive information in easily accessible functions, that I've gotten to some arbitrarily designated point where I've decided innerHTML is not a security risk, and that I've optimized my code to the point where I'm not worried about a very minor performance hit...
Am I creating any additional problems by using eval? Are there other security concerns other than pure code injection?
Or alternatively, is innerHTML something that I should show the same amount of care with? Is it similarly dangerous?
tl;dr;
Yes, you are correct in your assumption.
Setting innerHTML is susceptible to XSS attacks if you're adding untrusted code.
(If you're adding your code though, that's less of a problem)
Consider using textContent if you want to add text that users supplied; it is treated as plain text rather than parsed as HTML.
What the problem is
innerHTML sets the HTML content of a DOM node. When you set the content of a DOM node to an arbitrary string, you're vulnerable to XSS if you accept user input.
For example, if you set the innerHTML of a node based on user input from a GET parameter, "User A" can send "User B" a link to a version of your page whose HTML says "steal the user's data and send it to me via AJAX".
See this question here for more information.
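To make that concrete, here is roughly what the vulnerable pattern looks like (the parameter and element names are invented for the example):

// e.g. https://example.com/page?name=<img%20src=x%20onerror=alert(1)>
var params = new URLSearchParams(location.search);
var name = params.get("name");                       // attacker-controlled

document.getElementById("welcome").innerHTML =
    "Welcome, " + name + "!";                        // the payload runs as soon as it is inserted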
What can I do to mitigate it?
What you might want to consider if you're setting the HTML of nodes is:
Using a templating engine like Mustache, which has escaping capabilities (it escapes HTML by default; sketched below)
Using textContent to set the text of nodes manually (also sketched below)
Not accepting arbitrary input from users into text fields, and sanitizing the data yourself
See this question on more general approaches to prevent XSS.
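A minimal sketch of the first two options above (assuming Mustache.js is loaded; the template, element IDs and variables are made up):

var userInput = '<img src=x onerror=alert(1)>';       // untrusted

// Option 1: a templating engine that escapes by default.
var safeHtml = Mustache.render("<p>Hello, {{name}}</p>", { name: userInput });
document.getElementById("greeting").innerHTML = safeHtml;   // {{name}} was HTML-escaped
// (The triple mustache {{{name}}} is unescaped - reserve it for trusted markup only.)

// Option 2: set text, not HTML.
document.getElementById("greeting").textContent = "Hello, " + userInput;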
Code injection is a problem. You don't want to be on the receiving end.
The Elephant in the room
That's not the only problem with innerHTML and eval. When you're changing the innerHTML of a DOM node, you're destroying its content nodes and creating new ones instead. When you're calling eval you're invoking the compiler.
While the main issue here is clearly untrusted code, and you said performance is less of an issue, I still feel I must mention that the two are extremely slow compared to their alternatives.
The quick answer is: you did not think of anything new. If anything, do you want an even better injection example?
<scr\0ipt>alert("XSSed");</scr\0ipt>
The bottom line is that there are more ways to trigger XSS than you might think. All of the following are valid:
Event-handler attributes - onerror, onload, onclick, onmouseover, onblur, etc.
The use of character encoding to bypass filters (null byte highlighted above)
eval falls into another category, however - it is a byproduct, used most of the time to obfuscate. If your exposure is through eval rather than innerHTML, you're in a very, very small minority.
The key to all this is to sanitize your data using a parser that keeps up to date with what pen testers discover. There are a couple of those around. At the very least they need to filter all the vectors on the OWASP list - those are pretty common.
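As one example only (DOMPurify is not named in the answer above; it is simply a widely used parser-based sanitizer, and any maintained equivalent works the same way):

// DOMPurify parses the markup with a real HTML parser and strips scripts,
// event handlers and encoding tricks like the null-byte example above.
var dirty = '<scr\0ipt>alert("XSSed");</scr\0ipt><img src=x onerror=alert(1)>';
var clean = DOMPurify.sanitize(dirty);               // assumes the library is loaded
document.getElementById("content").innerHTML = clean;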
innerHTML isn't insecure in and of itself. (Nor is eval, if only used on your code. It's actually more of a bad idea for several other reasons.) The insecurity arises in displaying visitor-submitted content. And that risk applies to any mechanism with which you embed user-content: eval, innerHTML, etc. on the client-side, and print, echo, etc. on the server-side.
Anything you put on the page from a visitor must be sanitized. It doesn't matter a great deal whether you do it when the initial page is being built or added asynchronously on the client-side.
So ... yes, you need to show some care when using innerHTML if you're displaying user-submitted content with it.

Is there a way to "stop script" from running using JavaScript?

How can I stop a script from executing in JavaScript? In the case of cross-site scripting (XSS) attacks, the fundamental requirements are injection + execution of a script.
Imagine a scenario where an attacker is able to inject JavaScript into a page, and our goal is to stop the attacker's script from executing. The injection point, for example, can be any user-supplied input area.
Always, always escape your input. For XSS, use htmlentities() to escape HTML and JS. Here's a good article on PHP security:
http://www.phpfreaks.com/tutorial/php-security
There are basically two things to be careful of when dealing with XSS:
Escape your output. Escaping the input just takes more resources for nothing. Escape your user-submitted content on output. It also means that unescaped content is stored in your database, which is a good thing (in case of false positives you can fix them without losing content; in case of a new XSS policy you don't need to rewrite your whole database, etc.).
Secure your javascript code. Be very careful not to include some flaw using eval() or something like it.
As others have said, the best and easiest way to protect yourself from XSS is to validate input and properly escape output depending on the insertion point (most likely HTML, with entities; or JavaScript/CSS blocks, which are less common and more difficult to escape properly).
However, if your use case is outputting raw user input which is supposed to contain arbitrary HTML and you just want to prevent injected JavaScript to mess with your site, you can either:
1) Frame the content in a different, unique domain (so it cannot share cookies with your main document), e.g. xyz123.usercontent.com (with xyz123 different for each user)
2) Wait for the iframe sandbox attribute and/or CSP's sandbox directive to be standardized in every browser you support (and, of course, deny access to browsers that don't support them).
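A rough sketch of option 1 combined with the sandbox attribute (the subdomain and element ID are illustrative):

// Serve the user content from a throwaway subdomain so it cannot read your cookies,
// and sandbox the frame so injected scripts cannot run or navigate the top page.
var frame = document.createElement("iframe");
frame.src = "https://xyz123.usercontent.example.com/view/42";   // per-user subdomain
frame.setAttribute("sandbox", "");    // empty value: no scripts, unique origin, no top navigation
document.getElementById("user-content").appendChild(frame);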
Your only solution is to prevent scripts from being injected. There are several things you can do to achieve this:
Never trust input from the user. That means form inputs, query string parameters, cookie content, or any other data obtained from an incoming request.
Sanitize everything you render, everywhere you render it. I like to achieve this with two clearly-named rendering functions in templates, render and render_unsafe. Rails has had a similar interface since 3.0, which sanitizes all template data unless you specifically ask for unsanitized rendering. Having a clearly-named interface will make it easier to keep your templates in check, and ensuring that unsanitized renders are the exception forces you to make a decision every time you dump data into a template.
If you must allow the user to run functions directly, always do it through a whitelist. Have them supply a function name or some other identifier as a string and their arguments as JSON or some other parseable construct. Have a look at the design for Shopify's Liquid templating system which uses a similar execution-safe whitelisting pattern.
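A minimal sketch of that whitelist pattern (the function names and payload shape are invented for the example):

// Only functions registered here can ever be invoked by user-supplied data.
var allowed = {
    formatDate: function (iso) { return new Date(iso).toDateString(); },
    upperCase:  function (s)   { return String(s).toUpperCase(); }
};

function runUserAction(payload) {
    // payload is parsed JSON, e.g. {"fn": "upperCase", "args": ["hello"]}
    var fn = Object.prototype.hasOwnProperty.call(allowed, payload.fn)
        ? allowed[payload.fn]        // guard against prototype keys like "constructor"
        : null;
    if (typeof fn !== "function") {
        throw new Error("Unknown action: " + payload.fn);
    }
    return fn.apply(null, payload.args);   // never eval(), never window[name]()
}

runUserAction(JSON.parse('{"fn":"upperCase","args":["hello"]}'));   // "HELLO"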
Never trust input from the user. Not ever.
