Solutions to Unsafe JavaScript attempt? - javascript

Unsafe JavaScript attempt to access frame with URL http://starproject.galaxy-games.com/content/play-star-project from frame with URL http://spgame.galaxy-games.com/star_galaxy/File/Game_Main.php. Domains, protocols and ports must match.
So I get two of these when I'm trying to access this site, as well as "event.layerX and event.layerY are broken and deprecated in WebKit. They will be removed from the engine in the near future."
How can I solve this?

The owner of the site (presumably galaxy-games.com) needs to correct their Javascript. As an end user there isn't anything you can do.
Specifically, Javascript doesn't allow a frame on starproject.x.com to access variables and other objects in a frame on spgame.x.com because it considers them different sites. This prevents cross-site scripting, or "XSS", attacks. It's fixed by including matching document.domain assignments in the Javascript.
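For illustration, a minimal sketch of the kind of fix the site owner could apply, assuming both frames really are meant to trust each other (note that document.domain is deprecated in modern browsers):
// Run this in both frames so the browser treats them as the same origin:
// http://starproject.galaxy-games.com/... and http://spgame.galaxy-games.com/...
document.domain = 'galaxy-games.com';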
WebKit's event object is also objecting to what it's being asked to do, and the code needs to be re-written so it doesn't use layerX and layerY.
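Again for illustration only, a sketch of how the site's code could drop the deprecated properties (the element variable and the handler are hypothetical):
// Compute element-relative coordinates from standard event properties
// instead of reading the deprecated event.layerX / event.layerY.
element.addEventListener('mousedown', function (event) {
  var rect = element.getBoundingClientRect();
  var x = event.clientX - rect.left; // replaces event.layerX
  var y = event.clientY - rect.top;  // replaces event.layerY
  console.log('local coordinates:', x, y);
});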
None of this is within the control of end-users.

Related

List of XSS payloads with automatic Javascript/etc. execution?

I've recently been dealing a lot with XSS and payload creation. Generally, when creating the injection, there are 2 different types of XSS:
Automatic execution when loaded.
Execution which requires additional user interaction.
As you can see in the title, I'm looking for a list of payloads/injections, which lead to automatic code (js mainly) execution. To name a few:
<script>alert(1)</script>
<body onload="alert(1)">
<button autofocus onfocus="alert(1)">
...
The scheme used for the payload should be unique. (Naming other elements in the context of autofocus/onfocus attributes, like input or textarea, would be redundant.)
The payload should be supported by at least one of the following browsers (recent versions):
Chrome
Firefox
Safari
IE
You should check out the following resources:
https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet
And
http://html5sec.org
Edit: This is a moving target. Adding all the content from html5sec here would make the answer obsolete or wrong within a short time. Check the size of that resource to understand why.

Do or did javascript: URLs ever work in CSS, and if so, can it be prevented?

So I saw a code snippet today and was horrified:
<p style='background-image: url("javascript:alert(&apos;foo&apos;);");'>Hello</p>
Is it possible to execute javascript from within CSS this way? (It didn't work when I tested it on a clean Firefox profile, though maybe I made some stupid mistake and the concept does work.)
If so, what means are there to prevent this, either with an HTTP header or by declarations made by the HTML itself (e.g. when sourcing CSS files from another server)?
If not, was this never possible or has this changed?
The current CSS spec says only "valid image formats" can be used in a background-image:
In some cases, an image is invalid, such as a ‘<url>’ pointing to a resource that is not a valid image format. An invalid image is rendered as a solid-color ‘transparent’ image with no intrinsic dimensions. [...] If the UA cannot download, parse, or otherwise successfully display the contents at the URL as an image, it must be treated as an invalid image.
The spec is silent on whether or not a javascript: url that returns valid image data would work -- it'd be an interesting exercise to try to construct one! -- but I'd be pretty darn surprised if it did.
User agents may vary in how they handle invalid URIs or URIs that designate unavailable or inapplicable resources.
(As @Kaiido points out below, scripts within SVG will not run in this situation either, so I'd expect the whole javascript: protocol to be treated as an "inapplicable resource".)
IE supports CSS expressions:
width:expression(document.body.clientWidth > 955 ? "955px": "100%" );
but they are not standard and are not portable across browsers. Avoid them if possible. They have been deprecated since IE8.
Yes, in the past this attack vector worked (older browsers like IE6). I believe most modern browsers should protect against this kind of attack. That said, there can always be more complicated attacks that may get around current protections. If you are including any user-generated content anywhere, it is best to sanitize it before injecting it into your site.
It's possible to execute JavaScript where a URI is expected by prefixing it with javascript:. This is, in fact, how bookmarklets work. I don't think, however, that this would work with CSS url(), but it does with href or window.location, e.g.:
<a href="javascript:alert('foo')">say foo</a>
I think whoever wrote that bit of code was confused about it.

Selector interpreted as HTML on my website. Can an attacker easily exploit this?

I have a website that is only accessible via https.
It does not load any content from other sources. So all content is on the local webserver.
Using the Retire.js Chrome plugin I get a warning that the jquery 1.8.3 I included is vulnerable to 'Selector interpreted as HTML'
(jQuery bug 11290)
I am trying to motivate a quick upgrade, but I need more concrete information to justify it to the powers that be.
My questions are:
Given the above, should I be worried?
Can this result in an XSS-type attack?
What the bug is telling you is that jQuery may mis-identify a selector containing a < as being an HTML fragment instead, and try to parse and create the relevant elements.
So the vulnerability, such as it is, is that a cleverly-crafted selector, if then passed into jQuery, could define a script tag that then executes arbitrary script code in the context of the page, potentially taking private information from the page and sending it to someone with malicious (or merely prurient) intent.
This is largely only useful if User A can write a selector that will later be given to jQuery in User B's session, letting User A steal information from User B's page. (It really doesn't matter if a user can "trick" jQuery this way on their own page; they can do far worse things from the console, or with "save as".)
So: If nothing in your code lets users provide selectors that will be saved and then retrieved by other users and passed to jQuery, I wouldn't be all that worried. If it does (with or without the fix to the bug), I'd examine those selector strings really carefully. I say "with or without the fix" because if you didn't filter what the users typed at all, they could still just provide an HTML fragment whose first non-whitespace character is <, and jQuery would still parse it as an HTML fragment.
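To make that last point concrete, here is an illustrative payload (not from the question's code): even in fixed jQuery versions, a string that starts with < is deliberately treated as HTML, so passing unfiltered user input to $() still creates, and here executes, attacker-controlled markup:
// Dangerous in any jQuery version: the argument is parsed as HTML,
// the <img> element is created, its bogus src fails, and onerror fires.
$('<img src=x onerror=alert(1)>');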
As the author of Retire.js let me shed some light on this. There are two weaknesses in older versions of jQuery, but those are not vulnerabilities by themselves. It depends on how jQuery is used. Two examples abusing the bugs are shown here: research.insecurelabs.org/jquery/test/
The two examples are:
$("#<img src=x onerror=...>")
and
$("element[attribute='<img src=x onerror=...>'")
Typically this becomes a problem if you do something like:
$(location.hash)
This was a fairly common pattern on many web sites when single-page web sites started to appear.
So this becomes a problem if and only if you put untrusted user data inside the jQuery selector function.
And yes, the end result is XSS if the site is in fact vulnerable. HTTPS will not protect you against these kinds of flaws.
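To make the vulnerable pattern and one common fix concrete, a sketch (the element lookup is illustrative, not from the question's code):
// Vulnerable: untrusted input flows straight into the selector function.
// Visiting page.html#<img src=x onerror=alert(1)> executes the attacker's
// code in affected jQuery versions.
$(location.hash);

// Safer: treat the hash as a plain element id, never as HTML.
var id = decodeURIComponent(location.hash.slice(1));
var el = document.getElementById(id); // never parsed as an HTML fragment
if (el) {
  $(el).show();
}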

Using type="text/plain" for JavaScript?

I was looking at setting this up, simply out of curiosity; however, I was a little bemused when they stated that for this to work you need to:
Find any Javascript elements that set Analytics cookies. Examples might include Google Analytics and StatCounter.
Modify the script tag so that the type attribute is "text/plain" rather than "text/javascript"
Would this cause any problems with certain web browsers? Would it cause the HTML to no longer validate?
Also, does the "type" attribute even really serve a purpose anymore? I've only ever seen it assigned "text/javascript" before.
It does not cause problems, if the intent is that browsers not interpret the content of the element as script code but just as text data that is not rendered. The content is there for scripts to use, but otherwise it's ignored. Well, in some browsers the content might be made visible using CSS, but by default it's not shown.
Using <script type="text/plain"> is valid by HTML specs. Even <script type="Hello world ☺"> is valid, though it violates the prose requirement that the attribute value be a MIME type. The specs do not specify its meaning, but the only feasible interpretation, and what browsers do in practice, is that it is not in any scripting language and no execution as script is attempted.
So type="text/plain" may be used to intentionally prevent execution of a script, while still keeping it in the source. It may also be used to carry bulks of character data used for some processing.
The type attribute may serve purposes like this, and it can also be used to specify scripting languages other than JavaScript (rarely used, but still possible in some environments). Using the type attribute just to specify JavaScript is not needed, and cannot be recommended: the only thing that you might achieve is errors: if you mistype, e.g. type="text/javascirpt", the content will be regarded as being in an unknown language, hence ignored.
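As a concrete sketch of the "carry bulks of character data" use mentioned above (the id and template text are made up for illustration):
<script type="text/plain" id="message-template">
Hello, {name}! You have {count} new messages.
</script>
<script>
// The browser never executed the block above; read it back as raw text.
var template = document.getElementById('message-template').textContent;
console.log(template);
</script>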
Would this cause any problems with certain web browsers?
No
Would it cause the HTML to no longer validate?
No
Also, does the "type" attribute even really serve a purpose anymore?
Browsers use it to decide what interpreter to run the code through (or whether they should download externally referenced code at all).
Setting it to text/plain gives it a type that browsers won't have an interpreter for (since it isn't a programming language), which is the point.
Giving JS code the text/plain type is perfectly OK and will effectively disable it. To enable the script after the page has loaded, you need JS code that replaces the element with a fresh one whose type is text/javascript; merely changing the type attribute on the existing element won't run it, because a script element only executes when it is first inserted. Cookie blocking is a common example: insert a cookie-setting script with text/plain and replace it when the user gives cookie consent. This executes the script immediately, on the current page, with no reload needed.
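A hedged sketch of that consent pattern (the data-consent attribute and the function name are illustrative choices, not a standard API):
<script type="text/plain" data-consent="analytics">
/* analytics snippet that sets cookies */
</script>
<script>
function enableConsentScripts() {
  document.querySelectorAll('script[data-consent]').forEach(function (blocked) {
    // A script already in the DOM won't run just because its type changes,
    // so copy its code into a fresh, executable element.
    var fresh = document.createElement('script');
    fresh.type = 'text/javascript';
    fresh.text = blocked.text;
    blocked.parentNode.replaceChild(fresh, blocked);
  });
}
// Call enableConsentScripts() once the user accepts cookies.
</script>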
"application/javascript" is what it must be, according to the latest w3 specifications. But as expected, most of the older versions of IE does not support this. So it is safe to use "text/javascript" everywhere.

Would this be a good idea against XSS?

As it isn't really popular to use the Origin / X-Frame-Options HTTP headers, and I don't think the new CSP in Firefox would be better (overhead, complexity, etc.), I want to make a proposal for a new JavaScript / ECMA version.
But first I'm publishing the idea so you can say if it's bad. I call it simply jsPolicy:
Everyone who uses JavaScript has placed scripts in their HTML head. So why don't we use them to add our policies there to control all following scripts? Example:
<html>
<head>
<title>Example</title>
<script>
window.policy.inner = ["\nfunction foo(bar) {\n return bar;\n}\n", "foo(this);"];
</script>
</head>
<body>
<script>
function foo(bar) {
return bar;
}
</script>
<a href="#" onclick="foo(this);">Click Me</a>
<script>
alert('XSS');
</script>
</body>
</html>
Now the browser compares each <script>.innerHTML and the onclick value with the ones in the policy, and so the last script block is not executed (ignored).
Of course it won't be useful to double all the inline code, so we use checksums instead. example:
crc32("\nfunction foo(bar) {\n return bar;\n}\n");
results "1077388790"
And now the full example:
if (typeof window.policy != 'undefined') {
  window.policy.inner = ["1077388790", "2501246156"];
  window.policy.url = ["http://code.jquery.com/jquery*.js", "http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"];
  window.policy.relative = ["js/*.js"];
  window.policy.report = ["api/xssreport.php"];
}
The browser only needs to check whether the checksum of an inline script is listed in policy.inner, or whether the script's src URL matches policy.url, as sketched below.
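A rough sketch of that check in plain JavaScript (the isScriptAllowed helper and the crc32 function are assumptions; a real implementation would live inside the engine):
// Hypothetical helper: decide whether a script element passes the policy.
function isScriptAllowed(scriptEl, policy) {
  if (scriptEl.src) {
    // External script: its URL must match an allowed pattern,
    // where '*' is a wildcard as in the question's examples.
    var patterns = (policy.url || []).concat(policy.relative || []);
    return patterns.some(function (pattern) {
      var re = new RegExp('^' + pattern
        .replace(/[.?+^$()|[\]\\]/g, '\\$&')
        .replace(/\*/g, '.*') + '$');
      return re.test(scriptEl.src);
    });
  }
  // Inline script: its checksum must be whitelisted in policy.inner.
  return (policy.inner || []).indexOf(String(crc32(scriptEl.innerHTML))) !== -1;
}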
Note: The idea behind policy.relative is to allow local scripts only:
window.policy.url = false;
window.policy.relative = ["js/*.js"];
Note: policy.report should be nearly the same as done with CSP (sends blocked scripts and urls to an api):
https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-unofficial-draft-20110315.html#violation-report-syntax
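For illustration only, the kind of report such a browser might POST to the configured endpoint (the field names and checksum value are guesses modeled on CSP's report syntax, not part of the proposal):
// Hypothetical report sent to window.policy.report[0], e.g. api/xssreport.php
var report = {
  "document-uri": location.href,
  "blocked-inline-checksum": "3735928559",
  "blocked-uri": "http://attacker.example/payload.js"
};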
Important:
The policy can't be set twice (a second attempt throws a warning); it behaves like a constant.
To think about: The policy can only be set in the head (else it throws a warning)
The policy is only used to check the scripts that are part of the html source and not those that are placed on-the-fly. example:
document.write('<script src="http://code.jquery.com/jquery-1.5.2.min.js"></scr' + 'ipt>');
You don't need a policy.url definition for "http://code.jquery.com..." because the policy.inner checksum validated the complete script source. This means the source is loaded even if policy.url is set to false (yes, it's still secure!). This guarantees simple usage of the policy.
If one of the policies is missing, there is no limitation. This means that an empty policy.relative results in all local files being allowed. This guarantees backward compatibility.
If one of the policies is set to false, no usage is allowed (the default is true). Example:
policy.inner = false;
This disallows any inline scripting
The policy only ignores disallowed scripts and throws a warning to the console (an error would stop the execution of allowed scripts and this isn't needed)
I think this would make XSS impossible and, unlike CSP, it would avoid persistent XSS as well (as long as nobody overwrites the policy), and it would be much easier to update.
What do you think?
EDIT:
Here is an example made in Javascript:
http://www.programmierer-forum.de/php/js-policy-against-xss.php
Of course we can't control the script execution, but it shows how it could work if a browser supported jsPolicy.
EDIT2:
Don't think I'm talking about coding a little JavaScript function to detect XSS!! My jsPolicy idea would have to be part of a new JavaScript engine. You can compare it to a PHP setting placed in the .htaccess file: you cannot change that setting at runtime. The same requirements apply to jsPolicy. You can call it a global setting.
jsPolicy in short words:
HTML parser -> send scripts to JavaScript Engine -> compare with jsPolicy -> is allowed?
A) yes, execution through JavaScript Engine
B) no, ignored and send report to webmaster
EDIT3:
Referenced to Mike's comment this would be a possible setting, too:
window.policy.eval = false;
Cross-site scripting occurs on the client-side. Your policies are defined on the client-side. See the problem?
I like Content Security Policy, and I use it on all of my projects. In fact, I am working on a JavaScript framework, which has one of its requirements "be CSP-friendly."
CSP > crossdomain.xml > your policy.
The vast majority of XSS attacks come from "trusted" sources, at least as far as the browser is concerned. They are usually the result of echoing user input, e.g. in a forum, without properly escaping it. You're never going to get an XSS from linking to jQuery, and it is extremely rare that you will from any other linked source.
In the case when you are trying to do cross-domain scripting, you can't get a checksum on the remote script.
So although your idea seems fine, I don't really see a point to it.
This idea keeps getting floated and re-floated every so often... and each time security experts debunk it.
Don't mean to sound harsh, but this is not a development problem, it is a security problem. Specifically, most developers don't realize how many variants, vectors, exploits and evasion techniques there are.
As some of the other answers here mentioned, the core problem is deciding whether or not to trust whatever arrives at the browser: on the client side you have no way of knowing what is code and what is data, and your solution does not change that.
See e.g. this question on ITsec.SE for some of the practical issues with implementing this.
(Your question is more or less a duplicate of that one...)
Btw, re CSP - check this other question on ITsec.SE.
The policy is only used to check the scripts that are part of the html source and not those that are placed on-the-fly. example:
document.write('<script src="http://code.jquery.com/jquery-1.5.2.min.js"></scr' + 'ipt>');
You don't need a policy.url definition for "http://code.jquery.com..." because the policy.inner checksum validated the complete script source. This means the source is loaded even if policy.url is set to false (yes, it's still secure!). This guarantees simple usage of the policy.
It seems like you've given the whole game away here.
If I have code like
// Pull parameters out of query string.
var match = location.search.match(/[&?]([^&=]+)=([^&]*)/);
window[decodeURIComponent(match[1])](decodeURIComponent(match[2]));
and someone tricks a user into visiting my site with the query string ?eval=alert%28%22pwned%22%29 then they've been XSSed, and your policy has done nothing to stop it.
