Catching Content-Security-Policy exceptions with JavaScript - javascript

We have implemented CSP in our huge one-page web app. We keep getting errors from time to time; some were the result of improperly handled user form inputs, and some seem to originate from within the user's browser.
Is there a way to harvest more data on the client side when a CSP exception happens? The server-side report misses vital data. We know the source URL, the blocked URL, the violated rule/directive... but that is not helpful, as we don't know what was on the screen at that moment or whether the user's browser is to blame.
We need to know:
which JavaScript module was being viewed at the time / which JS layer or detail,
where the original source/trigger for that resource request was - was it our fault for displaying unsanitized form-embedded HTML injected by some robot, or was it the user's infected browser inserting these malware links?
Is there some hook like window.onCSPException = func that we can use to run client-side analysis? We need to quickly distinguish our own problems from problems in the client's browser.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP#Enabling_reporting:
By default, violation reports aren't sent. To enable violation reporting, you need to specify the report-uri policy directive, providing at least one URI to which to deliver the reports:
Content-Security-Policy: default-src 'self'; report-uri http://reportcollector.example.com/collector.cgi
Then you need to set up your server to receive the reports; it can store or process them in whatever manner you feel is appropriate.
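In addition to the report-uri reports, modern browsers also fire a securitypolicyviolation event on the document, which can be listened to on the client. A minimal sketch (the /csp-report endpoint and the appState field are placeholders for whatever context your app can add):
document.addEventListener("securitypolicyviolation", function (e) {
  // Standard fields on SecurityPolicyViolationEvent
  var report = {
    blockedURI: e.blockedURI,
    violatedDirective: e.violatedDirective,
    sourceFile: e.sourceFile,
    lineNumber: e.lineNumber,
    // app-specific context: which module/view was active at the time
    appState: window.location.hash
  };
  // Ship it to your own endpoint for correlation with the server-side reports
  navigator.sendBeacon("/csp-report", JSON.stringify(report));
});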

Related

Proxied Browser Requests with Cookies through a node server

What I am trying to achieve is displaying HTML content to a user from another server (an external server that I do not own) that is normally accessed through a browser.
To get the right HTML to display to the user, cookies for that server's domain must be provided.
APPROACH 1:
The easiest approach I tried is using an iframe. And of course, I got the usual CSP error in the Chrome console:
Refused to frame '{{URL}}' because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'self' {{ACCEPTED DOMAINS}}".
APPROACH 2:
I made a local proxy server in Node.js and tried to set up a proxied request with axios in the browser:
axios.get("{{URL to external server}}", { proxy: { host: "localhost", port: 5050 }});
I quickly realized that the axios proxy option is a Node-only feature, per this issue.
CONCLUSION:
I can't find a way to make a GET request from the browser to fetch HTML from a cookie-protected and CSP-protected server/domain. If I am able to display the HTML in the first place, I can probably use the same system to show a login page to update the user's cookies. Just a heads up: I am using React / Node.js.
If you know of a better approach that completely scraps away my code or uses a different technology, please share!
Thank you in advance.
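For what it's worth, a minimal sketch of the server-side proxy idea from APPROACH 2, assuming Node 18+ (for the global fetch) and Express; the external URL and cookie value are placeholders. The browser would then call axios.get("http://localhost:5050/external") instead of using axios's proxy option:
const express = require("express");
const app = express();
// Forward the browser's request to the external server, attaching the
// cookie that server expects (placeholder value shown here).
app.get("/external", async (req, res) => {
  const upstream = await fetch("https://external-server.example.com/page", {
    headers: { cookie: "session=PLACEHOLDER" }
  });
  const html = await upstream.text();
  res.type("html").send(html);
});
app.listen(5050);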

Capturing errors in a third party cross domain script

I have a script loaded from a 3rd-party domain (3p.com) into a webpage (page.com).
The goal is to capture error and unhandledrejection events occurring in the 3p script and ping them back to a server for logging.
Whilst there are event handlers available in the browser for this, it seems they do not allow any meaningful collection of information. From the MDN site:
When an error occurs in a script, loaded from a different origin, the details of the error are not reported to prevent leaking information (see bug 363897).
Is there a way to do this? The ideal solution is global, hooks into these events, and then allows filtering to only the errors from the script concerned. However, I would be happy to wrap the script execution in a try / catch instead, if that would capture all possible errors.
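A minimal sketch of the global hooks in question (the /log endpoint is a placeholder); note that for a cross-origin script loaded without CORS, the error event typically only reports "Script error." with no useful detail:
window.addEventListener("error", function (e) {
  // e.filename lets you filter to the script you care about, but it may be
  // empty or withheld for cross-origin scripts served without CORS headers.
  if (e.filename && e.filename.indexOf("3p.com") !== -1) {
    navigator.sendBeacon("/log", JSON.stringify({
      message: e.message,
      source: e.filename,
      line: e.lineno
    }));
  }
});
window.addEventListener("unhandledrejection", function (e) {
  // Promise rejections carry no filename, so filtering is harder here.
  navigator.sendBeacon("/log", JSON.stringify({ reason: String(e.reason) }));
});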
One option would be to set up your server so that requests to that third-party domain get bounced off your server, so that your clients have the ability to programmatically examine possible errors that get thrown. For example, instead of serving to your clients
<script src="https://3p.com/script.js"></script>
serve
<script src="https://yoursite.com/get3pscript"></script>
where the get3pscript route on your site makes a request to https://3p.com/script.js, then sends the response to the client, making sure the MIME type is correct. Then the client will be able to do whatever it wants with the script element without cross-domain issues.
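A rough sketch of such a route, assuming an Express server and Node 18+ (global fetch); the route name and script URL are the ones used above, and the port is arbitrary:
const express = require("express");
const app = express();
app.get("/get3pscript", async (req, res) => {
  const upstream = await fetch("https://3p.com/script.js");
  const body = await upstream.text();
  // Serve with the correct MIME type so the browser executes it as same-origin JS
  res.type("application/javascript").send(body);
});
app.listen(3000);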

Prevent local PHP/HTML files preview from executing javascript on server

I have some HTML/PHP pages that include JavaScript calls.
Those calls point to JS/PHP methods included in a library (PIWIK) stored on a remote server.
They are triggered using an http://www.domainname.com/ prefix to point to the correct files.
I cannot modify the source code of the library.
When my own HTML/PHP pages are previewed locally in a browser, I mean using a c:\xxxx kind of path, not a localhost://xxxx one, the remote scripts are called and do their processing.
I don't want this to happen; those scripts should only execute if they are called from a www.domainname.com page.
Can you help me secure this?
One can, for sure, bypass this protection directly by modifying the web pages on the fly with some browser add-on while browsing the real web site, but that is a little bit harder to achieve.
I've opened an issue on the PIWIK issue tracker, but I would like to secure and protect my web site and the corresponding statistics from this issue as soon as possible, while waiting for a further Piwik update.
EDIT
The process I'd like to put in place would be:
Someone opens a page from anywhere other than www.domainname.com
> this page calls a JS method on a remote server (or not, it may be copied locally),
> this script calls a PHP script on the remote server,
> the PHP script says "hey, where the hell are you calling me from? Go to hell!". Or the PHP script simply does not execute...
I've tried to play with .htaccess for that, but as any JS script must be on the client, it also blocks legitimate calls from www.domainname.com.
Untested, but I think you can use php_sapi_name() or the PHP_SAPI constant to detect the interface PHP is using, and do logic accordingly.
Not wanting to sound cheeky, but your situation sounds rather scary and I would advise searching for some PHP configuration best practices regarding security ;)
Edit after the question has been amended twice:
Now the problem is clearer. But you will struggle to secure this if the JavaScript and PHP are not on the same server.
If they are not on the same server, you will be reliant on HTTP headers (like the Referer or Origin header), which can be faked.
But PIWIK already tracks the referrer ("Piwik uses first-party cookies to keep track of some information (number of visits, original referrer, and unique visitor ID)"), so you can discount hits from invalid referrers.
If that is not enough, the standard way of being sure that a request to a web service comes from a verified source is to use a standard Cross-Site Request Forgery prevention technique: a CSRF "token", sometimes also called a "crumb" or "nonce". As this is analytics software, I would be surprised if PIWIK does not do this already, if it is possible with their architecture. I would ask them.
Most web frameworks these days have CSRF token generators and APIs you should be able to make use of; it's not hard to make your own, but if you cannot amend the JS you will have problems passing the token around. Again, the PIWIK JS API may have methods for passing session IDs and similar data around.
Original answer
This can be accomplished with a Content Security Policy to restrict the domains that scripts can be called from:
CSP defines the Content-Security-Policy HTTP header that allows you to create a whitelist of sources of trusted content, and instructs the browser to only execute or render resources from those sources.
Therefore, you can set the script policy to 'self' to only allow scripts from your current domain (the file system) to be executed. Any remote ones will not be allowed.
Normally this would only be available from a source where you can set HTTP headers, but as you are running from the local file system this is not possible. However, you may be able to get around this with the http-equiv <meta> tag:
Authors who are unable to support signaling via HTTP headers can use <meta> tags with http-equiv="X-Content-Security-Policy" to define their policies. HTTP header-based policy will take precedence over <meta> tag-based policy if both are present.
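For example, using the modern, unprefixed name of the header:
<meta http-equiv="Content-Security-Policy" content="script-src 'self'">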
Answer after question edit
Look into the Referer or Origin HTTP headers. Referer is available for most requests; however, it is not sent when navigating from an HTTPS page to an HTTP resource, and if the user has a proxy or privacy plugin installed it may strip this header.
Origin is only sent for cross-domain XHR requests (or even same-domain ones in some browsers).
You will be able to check that these headers contain your domain, which is where you want the scripts to be called from. See here for how to do this with .htaccess.
At the end of the day this doesn't make it secure, but, in your own words, it will make it "a little bit harder to achieve".

tinymce storing generated markup in database security concerns

I have used tinymce as a part of a forum feature on my site. I am taking the innerHTML of the textarea and storing it inside a SQL database.
I retrieve the markup when viewing thread posts.
Are there any security concerns with doing what I am doing? Does TinyMCE have any built-in features to stop malicious content/markup being added and therefore saved?
TinyMCE does a pretty good job at content scrubbing and input cleanup (on the client side). Being a very popular web rich-text editor, the creators have spent a lot of work on making it fairly secure in terms of preventing simple copy-and-paste of malicious content into the editor. You can do things like enable/disable cleanup, specify what tags/attributes/characters are allowed, etc.
See the TinyMce Configurations Page. Options of note include: valid_elements, invalid_elements, verify_html, valid_styles, invalid_styles, and extended_valid_elements
Also: instead of grabbing the innerHTML of the textarea, you should probably use TinyMCE's getContent() function. See: getContent()
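For example (assuming a single, default-initialised editor on the page):
var markup = tinymce.activeEditor.getContent();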
BUT this is all client-side JavaScript!
Although these features are nice, all of this cleanup still happens on the client. Conceivably, the client-side JS could be modified to stop escaping/removing malicious content, or someone could POST bad data to your request handlers without ever going through the browser (using curl, or any number of other tools).
So TinyMCE provides a nice baseline of client-side scrubbing; however, to be secure, the server should assume that anything it is sent is dirty and should treat all content with caution.
Things that can be done by the server:
Even if you implement the most sophisticated client-side validation/scrubbing/prevention, it is worthless as far as your backend's security is concerned. An excellent reference for preventing malicious data injection can be found in the OWASP Cross Site Scripting Prevention Cheat Sheet and the OWASP SQL Injection Prevention Cheat Sheet. You not only have to protect against SQL injection attacks, but also against XSS attacks if any user-submitted data will be displayed on the website for other unsuspecting users to view.
In addition to sanitizing user input data on the server, you can also try things such as mod_security to reject requests that match patterns indicative of malicious requests. You can also enforce a maximum input length on both the client and the server side, as well as a maximum request size for your server, to make sure someone doesn't try to send a GB of data. How to set a maximum request size varies from server to server. Violations of the maximum request size should result in an HTTP 413 (Request Entity Too Large) response.
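As one small illustration of the database side, a parameterised insert in Node (the mysql2 package and the table/column names are assumptions for the sake of the example) keeps user-supplied markup from altering the SQL itself; the stored HTML still needs sanitising or a CSP when it is displayed, as the next answer describes:
const mysql = require("mysql2/promise");
const pool = mysql.createPool({ host: "localhost", user: "app", database: "forum" });
async function savePost(userHtml) {
  // The driver sends userHtml as a bound parameter, so quotes or semicolons
  // inside the markup cannot change the structure of the query.
  await pool.execute("INSERT INTO posts (body) VALUES (?)", [userHtml]);
}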
Further to #jCuga's excellent answer, you should implement a Content Security Policy on any pages where you output the rich text.
This allows you to effectively stop inline script from being executed by the browser. It is currently supported by modern browsers such as Chrome and Firefox.
This is done with an HTTP response header sent with your page.
e.g.
Content-Security-Policy: script-src 'self' https://apis.google.com
will stop inline JavaScript from being executed if a user managed to inject it into your page (it will be ignored with a warning), but will allow script tags referencing either your own server or https://apis.google.com. This can be customised to your needs as required.
Even if you use a HTML sanitizer to strip any malicious tags, it is a good idea to use this in combination with a CSP just in case anything slips through the net.

source map HTTP request does not send cookie header

Regarding source maps, I came across a strange behavior in Chromium (build 181620).
In my app I'm using minified jQuery, and after logging in I started seeing HTTP requests for "jquery.min.map" in the server log file. Those requests were lacking cookie headers (all other requests were fine).
Those requests are not even shown in the Network tab in Developer Tools (which doesn't bother me that much).
The point is, the JS files in this app are only supposed to be available to logged-in clients, so in this setup the source maps either won't work or I'd have to move the source map to a public directory.
My question is: is this desired behavior (meaning source map requests should not send cookies) or is it a bug in Chromium?
The String InspectorFrontendHost::loadResourceSynchronously(const String& url) implementation in InspectorFrontendHost.cpp, which is called for loading sourcemap resources, uses the DoNotAllowStoredCredentials flag, which I believe results in the behavior you are observing.
This method is potentially dangerous, so this flag is there for us (you) to be on the safe side and avoid leaking sensitive data.
As a side note, giving jquery.min.js out only to logged-in users (that is, not serving it from a cookieless domain) is not a very good idea in a production environment. I'm not sure about your reasoning behind this, but if you definitely need to avoid giving the file to clients not visiting your site, you may resort to checking the Referer HTTP request header.
I encountered this problem and became curious as to why certain authentication cookies were not sent in requests for .js.map files to our application.
In my testing with Chrome 71.0.3578.98, if the SameSite cookie attribute is set to either Strict or Lax for a cookie, Chrome will not send that cookie when requesting the .js.map file. When there is no SameSite restriction, the cookie is sent.
I'm not aware of any specification of the intended behavior.
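For illustration, a cookie set like this (name and value are placeholders) was not sent with the .js.map request in that test, while the same cookie without the SameSite attribute was:
Set-Cookie: session=abc123; SameSite=Strict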
