What I'm building is a simple Chrome extension that adds my script to every page the user visits, like this
in contentscript.js:
var s = document.createElement('script');
s.src = 'https://localhost:3000/js/hack.js';
s.onload = function() {
this.parentNode.removeChild(this);
};
(document.head||document.documentElement).appendChild(s);
On the Rails backend there is simple login/password and OmniAuth (OAuth) authorization via the Devise gem.
If I sign in on the Rails pages and later, in hack.js, try to make some Ajax calls to my local server via XMLHttpRequest, the server does its best to assure me that the user is not authorized.
I bet it could be solved by opening an invisible iframe somewhere, but that's a hell of a pain; maybe there are some handier methods?
To make an authorized request, you probably have to fetch your script with an XHR, where you can add the necessary auth headers. The script can then be injected as inline code instead of supplying a src.
Do note that content scripts can do cross-domain XHR for all sites that have host permissions in the manifest, even if the page itself is not allowed to.
Since the page's Content Security Policy can forbid injecting a script with your URL as its src, while injecting it as an inline script effectively bypasses the page's CSP, this is the superior method anyway.
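For illustration, a minimal sketch of that approach (assuming the manifest grants host permissions for https://localhost:3000, and that you know which auth headers your backend expects):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://localhost:3000/js/hack.js', true);
// Add whatever auth headers your backend expects here, e.g.
// xhr.setRequestHeader('Authorization', 'Bearer <token>');
xhr.onload = function () {
    // Inject the fetched code as an inline script instead of pointing src at the URL,
    // so the page's CSP restrictions on external script sources do not apply.
    var s = document.createElement('script');
    s.textContent = xhr.responseText;
    (document.head || document.documentElement).appendChild(s);
    s.parentNode.removeChild(s); // the code has already executed, so the node can go
};
xhr.send();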
Related
I have a Single Page Application written in Angular using a Workbox powered service worker. The static app files are served with a Content-Security-Policy header containing a nonce. Static app files (such as index.html) which contain <script> tags are decorated with a nonce attribute (<script nonce="FCNjs05n4eQdfn39fn3c9h5segb">).
Since nonce values must not be used more than once, I'm under the assumption that the Service Worker should handle generating new nonce values for each request to static app files by modifying responses delivered from the precache. So, when a request to index.html is made, the Service Worker should generate a nonce, modify the CSP header's script-src directive to contain the new nonce, and index.html must have all nonce="" attributes updated to the new nonce as well.
I am assuming this is necessary, as reusing the same nonce value may pose a security risk; however, I am unsure of this, as there are no clear security recommendations for this specific scenario. I have found a code example in which the author generates a new nonce in the Service Worker from the current Date and the requested file's name (which, by my reasoning, is not secure, as it does not use cryptographically strong pseudo-random bytes).
It should also be noted that I cannot use hashes, as I ran into a hurdle with Mozilla Firefox and Safari. Firefox will only calculate and compare the hash of inline script tag contents with the hashes present in your Content-Security-Policy header. The hash of external JavaScript sources (e.g. <script src="js/foo.js"></script>) is NOT calculated and compared with the hashes you put in your CSP header. This complicates building secured applications, as it requires me to rely on a different security mechanism (nonces), which in turn requires me to take extra steps in my server and service worker to rotate these nonces in both the headers and the delivered content.
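For comparison, a minimal sketch (not from the example I found) of generating a cryptographically strong nonce inside a Service Worker with the Web Crypto API:
function generateNonce() {
    // 16 random bytes from the CSPRNG, base64-encoded so the value can be used
    // both in the CSP header and in the nonce="" attributes of the HTML.
    var bytes = new Uint8Array(16);
    crypto.getRandomValues(bytes);
    return btoa(String.fromCharCode.apply(null, bytes));
}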
The main problem is not the security of generating the nonce, but that there is no way to apply a nonce generated by the service worker (at least I do not see one).
As I understand it, HTTP headers are not stored for cached pages, hence the CSP HTTP header will be ignored when a page is restored from the cache. Adding a meta tag to the cached page (if that is technically possible) means it would be the only CSP delivered, so it should work as expected.
On the other hand, if you add/remove/change the <meta http-equiv='Content-Security-Policy' content=''> tag, the browser remembers and applies all the previous policies as well (see the test). As a result you'll simply have multiple CSPs in effect at the same time.
You need to reload the page to clear the browser's "memory"; that is the main problem SPAs have with CSP.
However, browser behaviour when cached pages are modified by script needs to be checked separately. AFAIK there is a method to bypass CSP and nonces via cached pages.
Update
In some cases you can use a workaround for external script hashes in Firefox and Safari: use an inline script to load the external scripts, like this:
var external = document.createElement("script");
external.src = "http://example.com/script.js";
external.setAttribute("type", "text/javascript");
document.head.appendChild(external);

var external2 = document.createElement("script");
external2.src = "http://example.com/script2.js";
external2.setAttribute("type", "text/javascript");
document.head.appendChild(external2);
The above inline script can be allowed through a 'hash-value' paired with 'strict-dynamic' (a variant of Google's strict CSP):
script-src 'sha256-hash_of_inline_script' 'strict-dynamic' https:
The 'strict-dynamic' token allows any scripts loaded by the legitimate inline script to run.
Safari does not support 'strict-dynamic', therefore it will fall back to the https: scheme-source to allow external scripts.
Chrome and Firefox do support 'strict-dynamic', therefore https: will be ignored.
Yes, Safari users will be less safe, but you can use real host-sources (https://example.com https://CDN.com) instead of the blanket https: scheme-source.
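For example, a tightened variant of the policy above with placeholder host-sources (example.com and CDN.com are illustrative, not hosts from the question):
script-src 'sha256-hash_of_inline_script' 'strict-dynamic' https://example.com https://CDN.com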
I'm using the Firefox console to run some JS from a GitHub page (https://github.com/user/repo/pull/1/files, specifically).
I'd like to be able to load the content of a file in the repo, and for that I'm using:
var client = new XMLHttpRequest();
client.open("GET", "https://github.com/user/repo/raw/master/path-to-file", true);
client.onreadystatechange = function() {
alert(client.responseText + client.responseURL);
}
client.send();
The problem is that the content actually lives at https://raw.githubusercontent.com/user/repo/master/path-to-file, and the URL I'm loading (the one the Raw button points to on the GitHub web page) redirects to that one.
Firefox reports this error in the console:
Content Security Policy: The page’s settings blocked the loading of a resource at
https://raw.githubusercontent.com/user/repo/master/path-to-file
(“connect-src https://github.com https://uploads.github.com
https://status.github.com https://collector.githubapp.com
https://api.github.com https://www.google-analytics.com
https://github-cloud.s3.amazonaws.com wss://live.github.com”).
Is there a way to load that file in some way?
I think I need to run the request from the GitHub page so that the file is accessible even if it's in a private repo (since the user's session is used when sending the request).
I'm sending the request to a URL that is allowed by the CSP, and that URL does the redirect, so I don't see why the CSP should block me, as I didn't try to access an "unauthorized" resource.
The alternative would be to load the GitHub web page at https://github.com/user/repo/blob/master/path-to-file, but that means I'd have to parse it to strip all the presentation markup, and it's brittle, as any change to GitHub might break the script.
Creating a script element dynamically, as shown below, downloads the JavaScript source asynchronously.
var script = document.createElement('script');
script.src = src_url;
var first_script = document.getElementsByTagName('script')[0];
first_script.parentNode.insertBefore(script, first_script);
What type of request object is created under the hood? XMLHttpRequest objects are used for asynchronous data exchange with AJAX. Is it the same object used for asynchronous script loading with dynamic script tags? If so, does the CORS (Cross-Origin Resource Sharing) issue apply here too?
XMLHttpRequest objects are used for asynchronous data exchange with AJAX. Is it the same object used for asynchronous script loading with dynamic script tags?
No, the browser just loads them as it does scripts in general.
If so, does the CORS (Cross-Origin Resource Sharing) issue apply here too?
No. CORS applies to XHR calls and cross-origin access, not to loading scripts via script tags. That's why JSONP works.
When you load JavaScript into a page, it doesn't matter where you load it from; it runs in the security context of the page that loaded it. So, for instance, if you have a page at http://example.com/foo.html and it loads a script from http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js, that's fine (and it's how CDNs like Google's and Microsoft's work, letting us load common libraries from there rather than from our own servers). If the code loaded by that script attempts to make XHR calls, the origin that applies is http://example.com, not http://ajax.googleapis.com. Similarly, if that script tries to access a window from another origin (perhaps the http://example.com page has an iframe in it from http://somewhereelse.com), again the origin that applies is http://example.com, and so the cross-origin script access is denied.
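A hypothetical illustration of that point (the URLs are placeholders): code delivered by the CDN but executed by the page runs with the page's origin, so the first request below is same-origin while the second is cross-origin.
// Inside a script the page at http://example.com/foo.html loaded from the CDN:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://example.com/api/data', true);   // same-origin: the page's origin applies
xhr.send();

var xhr2 = new XMLHttpRequest();
xhr2.open('GET', 'http://ajax.googleapis.com/some/data', true); // cross-origin and subject to CORS,
xhr2.send();                                                    // even though the script file came from there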
I know that this has been discussed many times here, and I have read most of those threads, but I can't seem to get my script working.
The problem is that I am trying to use the bitly API to shorten URLs in a Google Chrome extension. I save the user's login and apiKey in localStorage, and before I do so I validate them.
The code to do so is:
$.ajax({
    url: "http://api.bit.ly/v3/validate",
    dataType: 'jsonp',
    data: {
        login: login,
        apiKey: apiKey,
        x_login: "test",
        x_apiKey: "test"
    },
    success: function (jo, textStatus, jqXHR) {
        if (jo.status_code == 200) {
            setItem('dg_BitlyApiKey', apiKey);
            setItem('dg_BitlyLogin', login);
            alert('Saved');
        } else {
            alert('Incorrect login and/or apiKey!');
        }
    }
});
I do have my permissions set to "permissions": ["tabs", "notifications", "http://*/*", "https://*/*"] but I still keep getting:
Refused to load script from 'http://api.bit.ly/v3/validate?callback=jQuery17204477599645033479_1334062200771&login=&apiKey=&x_login=test&x_apiKey=test&_=1334062201506' because of Content-Security-Policy.
The script itself works outside the extension, so I assume the problem isn't within the script but with the permissions.
What am I doing wrong here?
The problem is that you aren't really doing an XHR request; you're doing a JSONP request on an insecure HTTP resource. See the question How to load an external JavaScript inside an extension popup and the related Chromium bug report.
Yeah, we're no longer allowing insecure scripts in extensions. If you load a script over HTTP, an active network attacker can inject script into your extension, which is a security vulnerability.
JSONP operates by dynamically adding a new script tag into your page and then executing the contents. In your case, the script resource is fetched over HTTP (instead of HTTPS). If your extension uses version 2 of the extension manifest, its background pages cannot fetch non-HTTPS scripts.
Solution: if you use the Bitly API over HTTPS, I believe that will fix your issue. Send your Ajax call to https://api-ssl.bitly.com/v3/validate instead of your current value of http://api.bit.ly/v3/validate.
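A sketch of the adjusted call, assuming everything else in your snippet stays the same:
$.ajax({
    url: "https://api-ssl.bitly.com/v3/validate",  // HTTPS endpoint instead of http://api.bit.ly
    dataType: 'jsonp',
    data: { login: login, apiKey: apiKey, x_login: "test", x_apiKey: "test" },
    success: function (jo) {
        // same success handling as before
    }
});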
You need to package your app/extension for cross-domain requests to work. A hosted application will not be able to do cross-domain requests. See:
Cross-Origin XMLHttpRequest in chrome extensions
To make cross-origin requests in a Chrome extension you need to follow "Avoid Cross-Origin Fetches in Content Scripts", i.e. perform the fetch from the background page instead; a sketch of that pattern follows the links below.
The full answer can be found in
https://stackoverflow.com/a/56929473/3680164
or in the documentation:
https://www.chromium.org/Home/chromium-security/extension-content-script-fetches
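A minimal sketch of that pattern, assuming a hypothetical message shape and that the target host is listed in the extension's permissions:
// content-script.js: ask the background page to do the cross-origin fetch
chrome.runtime.sendMessage({ url: "https://api-ssl.bitly.com/v3/validate" }, function (response) {
    console.log(response.data);
});

// background.js: perform the fetch and send the result back
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
    fetch(request.url)
        .then(function (res) { return res.text(); })
        .then(function (text) { sendResponse({ data: text }); });
    return true; // keep the message channel open for the async sendResponse
});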
How does the Same Origin Policy apply to the following two domains?
http://server1.MyDomain.com
http://server2.MyDomain.com
Can I run JS on a page hosted on server1 if the content is retrieved from server2?
Edit: according to Daniel's answer below, I can include scripts from different subdomains using the <script> tag, but what about asynchronous requests? What if I download a script from server2 onto the page hosted on server1? Can I use that script to communicate asynchronously with a service on server2?
Across different subdomains, you can only include scripts using the <script> tag, as it is exempt from the policy.
Using http://www.example.com/dir/page.html as source (from Wikipedia):
Compared URL Outcome Reason
---------------------------------------------------------------------------------------------
http://www.example.com/dir/page.html Success Same protocol and host
http://www.example.com/dir2/other.html Success Same protocol and host
http://www.example.com:81/dir2/other.html Failure Same protocol and host but different port
https://www.example.com/dir2/other.html Failure Different protocol
http://en.example.com/dir2/other.html Failure Different host
http://example.com/dir2/other.html Failure Different host (exact match required)
http://v2.www.example.com/dir2/other.html Failure Different host (exact match required)
UPDATE:
Can I use the script to communicate asynchronously with a service on server2?
Yes, you can with JSONP, which takes advantage of the open policy for <script> tags to retrieve JSON from other origins.
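A minimal JSONP sketch under those assumptions (handleData and the cb query parameter are hypothetical names; server2 must wrap its JSON in the named callback):
// On the page served from server1:
function handleData(data) {
    console.log('Received from server2:', data);
}

var script = document.createElement('script');
script.src = 'http://server2.MyDomain.com/service?cb=handleData'; // server responds with: handleData({...});
document.head.appendChild(script);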
You may also want to consider using a reverse proxy, as described in the following Stack Overflow post:
What am I missing in the XMLHttpRequest?
Sure, you can run any script that you insert on your page, no matter where it comes from. Think about how you insert a Google map on your page.
What you describe is a pattern called JSONP, where a server on another host returns a script that you insert in your page, and that script calls a function in your page with the response as its arguments.