Privacy - Track Chrome extension's outgoing AJAX queries - javascript

Is there any possible way to track a Chrome extension's outgoing network communication from a website?
Let's assume that a Chrome 'content script' extension sends AJAX queries to a server at a specified IP to build custom analytics. This extension works in the browser while the user browses various websites.
Is there any way for these websites to track what the extension does (that it makes AJAX requests) or where it sends data (i.e. to which IP the AJAX query was sent)?
UPDATE
To be clear, I am curious about an independent third-party website's tracking abilities, not the extension-user's.
UPDATE
More clarification: the extension is sending request to a server not related to the servers/websites the user is browsing.
EXAMPLE
A user browses YouTube and Facebook daily. The extension sends AJAX queries to a storage server, where the user's visited URLs (YouTube and Facebook) are stored. What I would like to know is: can, for example, Facebook tell that the extension does this, and can it learn the IP of the storage server?

Basically, no, because of the concept of isolated world. Emphasis mine:
Content scripts execute in a special environment called an isolated world. They have access to the DOM of the page they are injected into, but not to any JavaScript variables or functions created by the page. It looks to each content script as if there is no other JavaScript executing on the page it is running on. The same is true in reverse: JavaScript running on the page cannot call any functions or access any variables defined by content scripts.
So if you were thinking of doing something like overriding XMLHttpRequest, this would not work, as a content script has a "safe harbour" you can't touch.
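For illustration, this is roughly the interception a website could attempt, and which the isolated world defeats. The stub `XMLHttpRequest` below is a hypothetical stand-in so the sketch is self-contained; in a real page you would patch `window.XMLHttpRequest.prototype.open` instead.

```javascript
// A minimal stub standing in for the browser's XMLHttpRequest,
// so this sketch runs anywhere.
function XMLHttpRequest() {}
XMLHttpRequest.prototype.open = function (method, url) {
  this.method = method;
  this.url = url;
};

// Page-side "spying": wrap open() to record where every request is headed.
const observedUrls = [];
const originalOpen = XMLHttpRequest.prototype.open;
XMLHttpRequest.prototype.open = function (method, url, ...rest) {
  observedUrls.push(url); // log the destination
  return originalOpen.call(this, method, url, ...rest);
};

// Requests made in the page's own world are now visible...
const xhr = new XMLHttpRequest();
xhr.open("GET", "https://collector.example/stats");
console.log(observedUrls); // ["https://collector.example/stats"]

// ...but a content script gets its own isolated XMLHttpRequest, so the
// patched open() above never sees the extension's traffic.
```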
And that's even before considering that network operations can be delegated to the background script, which runs in a completely different origin.
There is an exception to this: an extension can sometimes inject code directly into the page context. That code then coexists with the website's JavaScript, and in theory either one can spy on the other. In practice, however, an extension can execute its code before any of the website's code has a chance to run, and can therefore shield itself from interference.
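The "run first" advantage can be sketched like this. Here `api.send` is a hypothetical stand-in for a global such as `window.XMLHttpRequest` or `window.fetch`; nothing below is a real extension API.

```javascript
// A global function the page might later try to monkey-patch.
const api = { send: (url) => `sent to ${url}` };

// A script running first (e.g. injected at document_start) keeps a
// private reference to the original function:
const originalSend = api.send;

// Later, page code replaces the global to spy on callers:
let spied = 0;
api.send = function (url) {
  spied += 1; // count intercepted calls
  return originalSend(url);
};

// The early script keeps using its saved reference, so the spy sees nothing:
console.log(originalSend("https://collector.example/stats"));
// "sent to https://collector.example/stats"
console.log(spied); // 0
```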

Maybe this is overkill, but you can try to sniff your own traffic using Wireshark (or any other such program) and have a look at the requests. If the extension uses HTTPS, things will be harder and you will have to decrypt the traffic.

Related

Use JavaScript to crawl a website -> Possible and which IP is shown on the crawled site

Is it possible to crawl a website from within an Angular app? I am talking about calling a website from Angular, not crawling an Angular app. If so, I am wondering which IP is shown to the crawled website. Since JavaScript runs client-side, I would assume it's the IP of the client, not of the server (as it would probably be with Node.js). But as far as I know, we can mostly use browser-implemented APIs in JS, so is it even possible to crawl websites with plain JavaScript (or Angular) methods?
Best Regards
Buzz
In theory, you can create an AJAX request to fetch the data with response type text/html. That would give you the remote document as a string. The browser wouldn't try to load the JavaScript and CSS in that document, though. That might not be a problem, but CORS is. For security reasons, most browsers prevent you from loading data from another origin (otherwise it would be too easy for criminals to inject JavaScript into any web page). See here for details: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
If you have control over the second domain, you can configure the server there to send Access-Control-Allow-Origin headers to the browser to allow access from the Angular App.
Note: You could use an iframe to load the other website but when the domains of the current document and the one in the iframe don't match, then you can't access the contents of the iframe from JavaScript.
One way to work around this is to install a proxy on your server. The browser can then ask your server for the pages in question. In this case, the remote web site will get the IP of your server.

Detect Javascript Tampering in Ajax call

We have a Javascript file that we have developed for our clients to use. The Javascript snippet takes a screenshot of the website it is run on and then sends it back to our server via jQuery.post()
The nature of our industry means that we have to ensure there is no way that the file can be tampered with by the client.
So the challenge is that we need to make sure that the screenshot was generated by the javascript file hosted on our server, and not one that's been copied or potentially tampered with in any way.
I know that I can get the script location using:
var scripts = document.getElementsByTagName("script"),
src = scripts[scripts.length-1].src;
But this won't help if a client tampers with that part of the SRC.
What methods can I employ to make sure that:
1) The post was made from the javascript file hosted on our server
2) The javascript was not tampered with in any way.
Short answer:
You can't.
You can't.
Both stem from the fact that once you hand something over to the client side, it's out of your hands. Nothing prevents the user from putting a proxy between you and their machine, running a process that intercepts content, or installing an extension that tampers with content, headers, cookies, requests, responses, etc.
You could, however, harden your app by preventing XSS (block injection of scripts via user input), using SSL (prevent tampering with the connection), applying CSP (only allow certain content on the page), adding CSRF tokens (ensure the request was authorized by the server), and other practices that make it harder for tampered content to get through.
But again, this won't prevent a determined hacker from finding an opening.

Prevent local PHP/HTML files preview from executing javascript on server

I have some HTML/PHP pages that include javascript calls.
Those calls point to JS/PHP methods included in a library (Piwik) stored on a remote server.
They are triggered using an http://www.domainname.com/ prefix to point to the correct files.
I cannot modify the source code of the library.
When my own HTML/PHP pages are previewed locally in a browser, I mean using a c:\xxxx kind of path, not a localhost://xxxx one, the remote scripts are still called and do their processing.
I don't want this to happen; those scripts should only execute if they are called from a www.domainname.com page.
Can you help me secure this?
One can of course bypass this protection by modifying the web pages on the fly with some browser add-on while browsing the real web site, but that is a little bit harder to achieve.
I've opened an issue on the Piwik issue tracker, but I would like to secure and protect my web site and its statistics from this issue as soon as possible, while waiting for a Piwik update.
EDIT
The process I'd like to put in place would be :
Someone opens a page from anywhere other than www.domainname.com
> this page calls a JS method on a remote server (or possibly a local copy),
> this script calls a PHP script on the remote server
> the PHP script says "hey, where the hell are you calling me from? Go away!", or simply does not execute.
I've tried to play with .htaccess for that, but since the JS must run on a client, it also blocks the legitimate calls from www.domainname.com.
Untested, but I think you can use php_sapi_name() or the PHP_SAPI constant to detect the interface PHP is using, and do logic accordingly.
Not wanting to sound cheeky, but your situation sounds rather scary and I would advise searching for some PHP configuration best practices regarding security ;)
Edit after the question has been amended twice:
Now the problem is more clear. But you will struggle to secure this if the JavaScript and PHP are not on the same server.
If they are not on the same server, you will be reliant on HTTP headers (like the Referer or Origin header) which are fakeable.
But Piwik already tracks the referrer ("Piwik uses first-party cookies to keep track of some information (number of visits, original referrer, and unique visitor ID)"), so you can discount hits from invalid referrers.
If that is not enough, the standard way of being sure that the request to a web service comes from a verified source is to use a standard Cross-Site Request Forgery prevention technique -- a CSRF "token", sometimes also called "crumb" or "nonce", and as this is analytics software I would be surprised if PIWIK does not do this already, if it is possible with their architecture. I would ask them.
Most web frameworks these days have CSRF token generators and APIs you should be able to make use of; it's not hard to make your own, but if you cannot amend the JS you will have problems passing the token around. Again, the Piwik JS API may have methods for passing session IDs and similar data around.
Original answer
This can be accomplished with a Content Security Policy to restrict the domains that scripts can be called from:
CSP defines the Content-Security-Policy HTTP header that allows you to create a whitelist of sources of trusted content, and instructs the browser to only execute or render resources from those sources.
Therefore, you can set the script policy to 'self' to only allow scripts from your current domain (the file system) to be executed. Any remote ones will not be allowed.
Normally this would only be available from a source where you can set HTTP headers, but as you are running from the local file system this is not possible. However, you may be able to get around this with the http-equiv <meta> tag:
Authors who are unable to support signaling via HTTP headers can use tags with http-equiv="X-Content-Security-Policy" to define their policies. HTTP header-based policy will take precedence over tag-based policy if both are present.
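As a sketch, assuming a modern browser (which uses the unprefixed header name rather than the X-Content-Security-Policy form quoted above), such a meta-tag policy could look like:

```html
<!-- Only allow scripts from the page's own origin and from
     www.domainname.com; anything else is blocked by the browser. -->
<meta http-equiv="Content-Security-Policy"
      content="script-src 'self' https://www.domainname.com">
```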
Answer after question edit
Look into the Referer or Origin HTTP headers. Referer is available for most requests; however, it is not sent when an HTTPS page requests an HTTP resource, and if the user has a proxy or privacy plugin installed, it may be stripped.
Origin is only available for cross-domain XHR requests (or, in some browsers, even same-domain ones).
You will be able to check that these headers contain your domain where you will want the scripts to be called from. See here for how to do this with htaccess.
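A hedged .htaccess sketch of that check (the piwik.php filename is Piwik's tracker endpoint; adjust it to whatever script you are protecting, and remember the Referer is client-supplied and fakeable):

```apache
# Refuse requests for the tracking script unless the Referer
# matches your domain.
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?domainname\.com/ [NC]
RewriteRule ^piwik\.php$ - [F]
```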
At the end of the day this doesn't make it secure, but as in your own words will make it a little bit harder to achieve.

"The owner of this website has banned your access based on your browser's signature" ... on a url request in a python program

When doing a simple request in Python (Enthought Canopy, to be precise) with urllib2, the server denies me access:
data = urllib.urlopen(an url i cannot post because of reputation, params)
print data.read()
Error:
Access denied | play.pokemonshowdown.com used CloudFlare to restrict access
The owner of this website (play.pokemonshowdown.com) has banned your access based on your browser's signature (14e894f5bf8d0920-ua48).
This is apparently a generic issue, so I found several clues on the web.
https://support.cloudflare.com/hc/en-us/articles/200171806-Error-1010-The-owner-of-this-website-has-banned-your-access-based-on-your-browser-s-signature:
A firewall, proxy, a browser plugin or extension may be throwing a false positive. Try visiting the site with a different browser as an alternative way of accessing the site.
https://support.cloudflare.com/hc/en-us/articles/200170176-Why-am-I-getting-a-Checking-your-Browser-before-accessing-message-before-entering-a-site-on-CloudFlare-:
The "Checking your browser before accessing (insertsite.com)" message occurs when the site owner has turned on a DDoS protection and mitigation tool called "I'm Under Attack". The page will generally go away and grant you access to the site after 5 seconds.
Note: You will need to have both JavaScript and Cookies turned on in your browser to pass the check. The check is in place to make sure that you are not part of a botnet."
The answers are rather clear, except for one thing: I'm not using any browser! The request is made through a Python program, with an urllib.urlopen call...
Does this mean I'm supposed to have, like, cookies and JavaScript turned on in... Enthought Canopy? Does that make any sense at all? I barely understand how this browser-specific check can trigger when accessing the site with a basic request from a programming console. And that's why I ask for your help.
Why does it happen? How to bypass it?
What this site is "checking" is not your browser but the "user agent": a string that your client program (browser, Python script, or whatever) sends as a request header. You can specify another user agent; cf. Changing user agent on urllib2.urlopen.
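A sketch of that header change, using Python 3's urllib.request (the question's urllib2 is its Python 2 ancestor); the User-Agent string below is an arbitrary example replacing the default "Python-urllib/x.y":

```python
import urllib.request

# Build a request that announces a browser-like user agent instead of
# the default Python-urllib signature.
req = urllib.request.Request(
    "https://play.pokemonshowdown.com/",
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"},
)
print(req.get_header("User-agent"))  # Mozilla/5.0 (X11; Linux x86_64)

# response = urllib.request.urlopen(req)  # network call; may still hit the
# Cloudflare JS challenge, which a plain HTTP client cannot solve
```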
I just saw this with Safari from my home IP, looking at a site I author! After logging in to the Cloudflare website and hitting refresh, it was back. Probably my mobile internet was too slow (in New Zealand) and the JavaScript did not load in time? I have DDoS protection and "under attack" enabled, AFAIK.

Is it possible for a user to modify site javascript in browser?

I don't know a lot about security, but I'm trying to figure out how to keep my site as safe as possible. I understand that the more I can handle on the backend the better, but for cases where I'd like to hold some variables on the client, is that stuff utterly unchangeable?
For instance, if I set a global variable to the user's role (this is a pure AJAX web app so a global variable is always available), is it possible by any software to edit javascript within the browser so that a user might change their role?
Security is a big topic in the web development world and it is important for you to determine how secure your web application should be.
There are three parts for you to consider:
Frontend (the website)
Everything here is insecure: whatever is shown to you in the browser can be changed or modified. Just open the developer console and you can change variables and re-render the HTML page.
Transportation level (https)
Web communication is based on the HTTP(S) protocol, which lets your server and client talk to each other. Using HTTPS will protect you from man-in-the-middle attacks.
Backend
Always make sure to authenticate requests and validate the data sent from your client (POST, PUT, DELETE).
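A minimal sketch of that backend check: authorization is re-derived from the server-side session, never from anything the client sends. The `sessionStore` map is a hypothetical stand-in for your real session backend.

```javascript
// Hypothetical session backend: session ID -> server-side user record.
const sessionStore = new Map([
  ["sess-1", { user: "alice", role: "viewer" }],
]);

function handleDelete(sessionId, requestBody) {
  const session = sessionStore.get(sessionId);
  // requestBody.role is client-controlled, so it is ignored entirely;
  // only the server-side record decides what this user may do.
  if (!session || session.role !== "admin") {
    return { status: 403, body: "forbidden" };
  }
  return { status: 200, body: "deleted" };
}

// Even a client claiming to be admin is rejected:
console.log(handleDelete("sess-1", { role: "admin" }).status); // 403
```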
Prevention
The good thing is that even if you change a variable on the client side, it only affects that client's session and doesn't affect any others. There are a few ways you could increase security on your frontend:
Obfuscate
This means making your source code harder to read. You could use tools like minification and concatenation on your source code.
Data
Never ever store sensitive user data (passwords, user info) on the client side, since people can see and change it.
This should get you learning more about security
With developer tools, if you run in debug mode with breakpoints, you can change the values of variables.
All data that goes to the client can be viewed and tampered with. There are a lot of ways to do that (developer tools, an HTTP proxy, ...).
