I implemented reCAPTCHA. After the "I am not a robot" checkbox is clicked, Google generates a response token.
Client side (js)
function checkCaptchaAndSubscribe(thisContext)
{
    var captchaResponse = grecaptcha.getResponse();
    if (captchaResponse == "") {
        $captchaRequired.css('display', 'block');
        return false;
    }
    grecaptcha.reset();
    $captchaRequired.css('display', 'none');
    jQuery.ajax({
        url: "/black_newsletter2go/index/verify",
        method: "POST",
        async: true,
        data: {
            recaptchaResponse: captchaResponse
        },
        success: function(response) {
            $statusContainer.show();
            if (response != "success") {
                $status.html("<h2 class='nl2go_h2'>Die Captcha Validierung ist fehlgeschlagen!</h2>");
                return false;
            }
            subscribe(thisContext);
        }
    });
}
I send the token to my server via AJAX and validate it there like this:
Server side (php):
public function verifyAction()
{
    $captchaResponse = $this->getRequest()->getParam('recaptchaResponse');
    if (empty($captchaResponse)) {
        return "captcha response is empty";
    }

    $secretKey = Mage::helper("recaptcha")->getSecretKey();
    $url = 'https://www.google.com/recaptcha/api/siteverify';
    $data = array(
        'secret'   => $secretKey,
        'response' => $captchaResponse,
    );

    // use key 'http' even if you send the request to https://...
    $options = array(
        'http' => array(
            'header'  => "Content-type: application/x-www-form-urlencoded\r\n",
            'method'  => 'POST',
            'content' => http_build_query($data)
        )
    );
    $context = stream_context_create($options);
    $result  = file_get_contents($url, false, $context);
    $result  = json_decode($result);
    //var_dump($result);
    //exit();

    if ($result->success
        && (strpos(Mage::getBaseUrl(), $result->hostname) !== false)) {
        echo "success";
    } else {
        echo "fail";
    }
}
This is the output of the $result object:
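For reference, a passing siteverify response decoded this way typically contains fields like these (illustrative values only; the exact timestamp and hostname will differ):

// var_dump($result) for a passing check would show something along these lines:
// object(stdClass) {
//   ["success"]      => bool(true)
//   ["challenge_ts"] => string(20) "2016-03-11T09:42:30Z"
//   ["hostname"]     => string(15) "www.example.com"
// }
// On failure, "success" is false and an "error-codes" array explains why.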
It returns "success" if the checks are successful, otherwise "fail".
But is this enough? What if an attacker uses an HTTP proxy like Burp Suite to change the response to "success"? Then he could bypass my checks and always get through, or am I wrong?
It uses a key pair to encrypt/decrypt the info, so the info is sent encrypted. That's why it can't be tampered with, but of course that means you must make sure the private key doesn't get stolen.
There the server knows and saves the state in its own storage, so if the client tries to claim "success" when the result was "fail", the server will know, no matter what. So a hacker changing the value on the client side is not likely to achieve much; it will depend on your code, of course. If you are using that reCAPTCHA to log the user in, then that login attempt will fail on the server side if the reCAPTCHA returned "fail", so whether the client is told "success" or not, it still won't be logged in. The client should never be the keeper of such state, since it can't be trusted (it can always carry tainted data).
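As a minimal sketch of that idea (assuming a Magento session is available and reusing the verifyAction above; the flag name is illustrative), the server records the outcome itself and re-checks it before subscribing, instead of trusting the "success" string echoed back by the browser:

// In verifyAction(), after evaluating Google's response (sketch only):
$passed = $result->success && strpos(Mage::getBaseUrl(), $result->hostname) !== false;
Mage::getSingleton('core/session')->setData('captcha_passed', $passed); // server-side record
echo $passed ? "success" : "fail";

// In the action that actually performs the subscription, ignore what the browser claims:
if (!Mage::getSingleton('core/session')->getData('captcha_passed')) {
    return; // reject: the captcha was never verified server-side
}
Mage::getSingleton('core/session')->unsetData('captcha_passed'); // one-time use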
It works in a way similar to what you'd do between a browser and a server using HTTPS.
The communication between your client and your server should also be on HTTPS to avoid some easier man in the middle (MITM) problems. However, it is always possible to have someone who becomes a proxy, which is how most MITM work, and in that case, whatever you're doing can be changed by the MITM.
The one thing the MITM can't do, however, is create a valid certificate for the final destination. In that sense there is some protection, but many people don't verify certificates each time they connect to a website. One technique, though, has been for the MITM to not offer you HTTPS at all: only the MITM and your server use HTTPS, while the client stays on HTTP. Your code could detect that, but obviously the MITM can also change that code. Similarly, setting a cookie with HttpOnly and Secure can enhance security, but that too can be intercepted by a MITM.
Since the MITM can completely change your scripts, there is pretty much nothing you can do on the client's side that would help detect such a problem and on the server side, you will receive hits that look like what the client sent to you. So again, no real way to detect a MITM.
There is a post that was asking that very question: could I detect an MITM from the server side? It's not impossible, but it's rather tricky. There are solutions being put in place by new implementations/extensions to the normal HTTP solution, but those require an additional application that connects to a different system and there is no reason why such could not also be proxied by a MITM once enough people use such solutions.
The result comes from a URL that is owned by Google.
If the user tampered with what is being sent to your PHP script, then the Google web service will return a failure and you won't let such a request through.
I have an AJAX function that makes call to a page on my website.
$(document).on('click', thisIdentity, function() {
    var trigger = $(this);
    var items = trigger.attr('data-values').split('_');
    $.ajax({
        type: "POST",
        url: "/mod/mypage.php",
        data: { pid: items[0], uid: items[1] },
        dataType: "json",
        success: function(data) {
            if (data.job == 1) {
                // do something
            }
        }
    });
});
Now this works fine and does as intended. However, if I use a third-party app like Postman and make a POST request to www.xyz.com/mod/mypage.php with the parameters pid: 1 and uid: 2, it still goes through and makes changes to my database.
Is there any way to check that the request was generated from my domain/server only?
How can I stop such POST requests coming from outside my domain?
One thing I thought of was to generate a token, set it in the session before this request, and check in mypage.php whether the token is set. Is this a feasible way?
This is exactly what a CSRF token is for. Users must navigate to the page first, which generates a token to submit, so any POST request made without first navigating to the page is invalid.
However, trying to stop someone from POSTing a request to your endpoint from a utility like Postman is an exercise in futility. You must authenticate every request to the endpoint; in this case, just check that the photo id is owned by the submitting client.
OWASP provides a decent description of what a CSRF is:
Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they're currently authenticated. CSRF attacks specifically target state-changing requests, not theft of data, since the attacker has no way to see the response to the forged request.
Example validation flow
Login.php
<?php
// Establish DB connection, validate
$_SESSION['id'] = $db->getUserId();
$_SESSION['admin'] = $db->getAdminStatus();
Delete.php
<?php
if (!$db->isPhotoOwner($_POST['pid'])) {
    exit;
}
// Delete photo flow
Admin.php
<?php
if (!$_SESSION['admin']) {
    die("Not admin.");
}
// Do admin action or whatever
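The session-token idea from the question fits the same flow. A rough sketch (file, field, and key names are illustrative, not an exact recipe):

<?php
// Page that renders the buttons (sketch): create the token once per session.
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = md5(uniqid(rand(), true));
}
// Echo $_SESSION['csrf_token'] into the page (e.g. a hidden field) and send it
// along in the AJAX data, e.g. { pid: items[0], uid: items[1], token: ... }.

// mypage.php (sketch): reject any request that does not carry the session token.
session_start();
if (empty($_POST['token']) || $_POST['token'] !== $_SESSION['csrf_token']) {
    header('HTTP/1.1 403 Forbidden', true, 403);
    exit;
}
// ...then continue with the ownership check shown above...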
You could have the calling page identify itself with $_SERVER['SCRIPT_NAME'] and write that value to a hidden input field or into $_POST, then check for it at the beginning of processing. Any confirmed value you choose might work.
If they are already imitating your JSON data, then maybe have PHP write the value into the JavaScript dynamically as the page is served.
I want my user to download a file which my script generates and put it on their server (this part has been built successfully). The goal is to verify that the user has the ability to upload files to the website they claim they own. I will be checking the root of the website, so an example would be http://www.google.com/file
I then want my script to check whether the file is present on their server. I figured I could use some JavaScript to check whether the user's domain combined with the file path returns any HTTP response other than 404.
So I looked around on the internet and tried a few things. Here is the resulting function:
/* DUMMY */
var url = 'http://www.google.com/';
var xhr = new XMLHttpRequest();
xhr.open("HEAD", url, true);
xhr.onreadystatechange = function() {
    alert("HTTP Status Code: " + xhr.status);
};
xhr.send(null);
The URL I used should exist, so this should result in a 200 (or something along the lines of "it exists"). However, for most URLs I get status 0 and the following error: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost' is therefore not allowed access.
Could anyone help me out with my script?
I would suggest using PHP to check this instead. If you are wondering about the error you are getting, read about CORS.
This is a simple example of how to do it:
$file = 'http://www.domain.com/somefile.jpg';
$file_headers = @get_headers($file);
if ($file_headers[0] == 'HTTP/1.1 404 Not Found') {
    $exists = false;
}
else {
    $exists = true;
}
From here: http://www.php.net/manual/en/function.file-exists.php#75064
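Wrapped up as a helper for the verification scenario in the question (a sketch only; the function name and the exact status check are assumptions), it could look like this:

<?php
// Returns true when the verification file is reachable on the claimed domain.
function remote_file_exists($url)
{
    $headers = @get_headers($url);
    if ($headers === false) {
        return false; // DNS failure, connection refused, etc.
    }
    // The status line may be HTTP/1.0 or HTTP/1.1, so just look for a 404 marker.
    return strpos($headers[0], '404') === false;
}

// Example: check the generated file at the root of the domain the user claims to own.
var_dump(remote_file_exists('http://www.example.com/verification-file.txt'));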
You need Cross-Origin Resource Sharing (CORS) enabled on the destination server (such as google.com, where your file is).
To prevent vulnerabilities, your JavaScript cannot make requests to just any foreign server. You can only do so against a server you own, by explicitly adding code to the config settings to allow requests from your client's origin.
I would suggest (because it's the most portable solution) to put a proxy script on your server. Something along the lines of
<?php
$url = filter_var($_GET['url'], FILTER_VALIDATE_URL);
if ($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // don't echo the remote body
    curl_setopt($ch, CURLOPT_NOBODY, true);         // a HEAD request is enough to get the status
    curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    echo json_encode(array('success' => 1, 'status' => $code));
}
else {
    echo json_encode(array('success' => 0, 'status' => 0));
}
You can then use XMLHttpRequest and JSON.parse() on the JavaScript side to analyze the result. You can also extend the code to provide you with additional data about the remote server, which could always be useful.
I have a little shop basket where I can add products.
Here is my index.php
...<script type="text/javascript" src="function.js"></script>
<a title="Add to basket" onclick="add_product_to_cart('apple','1');" href="#">Apple</a><br>...
Here is the functions.js
function add_product_to_cart(item, id) {
    var item = item;
    var id = id;
    $.ajax({
        url: "ajax.php",
        type: "POST",
        data: {
            action: "add",
            name: item,
            id: id
        },
        success: function(data) {
            //do something
        }
    });
}
When I click on "Apple", the parameters are sent with the AJAX request, and they are visible, for example, in the Firefox web developer tools.
Is there a chance to hide these POST parameters, maybe to protect them from attacks from outside? Or is my thinking about how to add items to the basket totally wrong? Thanks for any help!
Here is a screenshot from my web developer tools.
You can't hide what is being logged in the Network tab of Chrome Developer Tools. Even if you could, a hacker could sniff the requests using Fiddler or another web proxy. Client-side validation is nice, but it is not the be-all and end-all. Most people won't try to send requests to your server illegitimately, but some will, I suppose.
You really should be doing server-side validation that the data sent to the server is indeed valid. Don't rely on the client to do this, as anyone can modify what is sent directly to the server. In your PHP code, you would do something like this:
function validate_data($data)
{
    // other code here
    if (!is_discontinued($data['product_id']))
        add_to_cart($data['product_id']);
    // other code after
}

function is_discontinued($product_id)
{
    // do database query
    $is_discontinued = lookup_product($product_id);
    return $is_discontinued;
}
This is very barebones, but it should give you the idea of what needs to be done.
EDIT:
After looking at some of your recent comments, you may also like to include CSRF tokens to make sure that requests originate from your domain. These tokens are generated on the server and often stored in hidden fields in the form to be sent back to the server with each request. Then you validate the token on the server and after it passes validation, you perform your action.
Note, this will only slow down most hackers, but it can deter some who aren't dead set on performing illegitimate requests.
In terms of sending the value with AJAX requests, you would need to select your hidden field and add its value to the POST data. Your AJAX request would then look something like this:
function add_product_to_cart(item, id) {
    var item = item;
    var id = id;
    $.ajax({
        url: "ajax.php",
        type: "POST",
        data: {
            action: "add",
            name: item,
            id: id,
            token: $('#csrf_token').val()
        },
        success: function(data) {
            //do something
        }
    });
}
On the server (PHP), you would have something like this:
function get_csrf_token()
{
    $token = md5(uniqid(rand(), TRUE));
    if (!isset($_SESSION['token'])) {
        $_SESSION['token'] = $token;
    }
    else {
        $token = $_SESSION['token'];
    }
    return $token;
}
function valid_csrf_token()
{
    if (isset($_POST['token'])) {
        if ($_POST['token'] == $_SESSION['token'])
            return true;
        else
            return false;
    }
    else {
        return false; // no token was sent with the request
    }
}
Then in your form, you would have your hidden field like this:
<input id="csrf_token" type="hidden" value="<?php get_csrf_token(); ?>" />
Finally, your original PHP validation function would include the CSRF token validation:
function validate_data($data)
{
    // other code here
    if (!is_discontinued($data['product_id']) && valid_csrf_token())
        add_to_cart($data['product_id']);
    else
        header('HTTP/1.1 400 Bad Request', true, 400); // set status to bad request
    // other code after
}
Note, setting the status to bad request is optional, but it will show the request was not as expected.
The answer is: no, there is no way to hide the data you're sending to the server using AJAX.
But this shouldn't be a problem, since you MUST validate everything on the server.
You can validate things on the client side (for normal users) to get an easier/faster response on the client and less traffic to your server. But, as said above, you must revalidate everything on the server, because that is the only way you can ensure your website will still work as expected even if malicious data is sent to it.
By the way, you can even block/ban users who try to do something different from what your unmodified client code usually does.
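As a sketch of what that server-side revalidation could look like for the add-to-basket endpoint (the lookup function and response format are assumptions, not part of the original code), the client only supplies an id, and everything else comes from your own database:

<?php
// ajax.php (sketch): never trust the product name or price sent by the browser.
session_start();
$id = isset($_POST['id']) ? (int) $_POST['id'] : 0;
$product = find_product_by_id($id); // hypothetical DB lookup returning null if unknown

if (isset($_POST['action']) && $_POST['action'] === 'add' && $product !== null) {
    // Store only the id and quantity; name and price are read from the DB row later.
    $_SESSION['cart'][$id] = isset($_SESSION['cart'][$id]) ? $_SESSION['cart'][$id] + 1 : 1;
    echo json_encode(array('ok' => 1));
} else {
    header('HTTP/1.1 400 Bad Request', true, 400);
    echo json_encode(array('ok' => 0));
}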
I am trying to access the uTorrent web API, which uses a token authentication system that is detailed here.
The JavaScript on my page is:
<script>
$.getJSON("http://XXX.XXX.XXX.XXX/lib/token.php", function(response) {
    var head = document.getElementsByTagName('head')[0];
    var script = document.createElement('script');
    script.type = 'text/javascript';
    //script.onreadystatechange = function () {
    //    if (this.readyState == 'complete') utorrent();
    //}
    //script.onload = utorrent();
    script.src = 'http://XXX.XXX.XXX.XXX:8080/gui/?list=1&token=' + response.token;
    head.appendChild(script);
});
</script>
It simply retrieves the token from a PHP file and passes it along the chain. I have confirmed that the token is being passed and is not being poisoned. My PHP document is below:
<?php
header('Content-type: text/json');
$token = file_get_contents('http://[username]:[password]@XXX.XXX.XXX.XXX:8080/gui/token.html');
$token = str_replace("<html><div id='token' style='display:none;'>", "", $token);
$token = str_replace("</div></html>", "", $token);
$response = array('token' => $token);
echo json_encode($response);
?>
This gives me confirmation of the token:
Object {token: "GMt3ryaJE64YpXGN75-RhSJg-4gOW8n8XfTGYk_ajpjNLNLisR3NSc8tn1EAAAAA"}
but then I receive a 400 error code when retrieving the list:
GET http://XXX.XXX.XXX.XXX:8080/gui/?list=1&token=GMt3ryaJE64YpXGN75-RhSJg-4gOW8n8XfTGYk_ajpjNLNLisR3NSc8tn1EAAAAA 400 (ERROR)
Any help/thoughts/ideas would be greatly appreciated.
Just adding my 2 cents.
I've been doing a similar implementation in .NET MVC. I was able to get the token as you did, but the list=1 feature didn't work for me either; I was getting the 400 bad request code (as you have found).
The solution for me:
In the token.html response, there is a token in the div and also a GUID in the header.
To break it down:
Call token.html with uTorrent credentials
In the response content, parse the HTML to get the token
In the response header, there is a value with the key Set-Cookie, which looks like
Set-Cookie: GUID=<guid value>
I needed to use this value (GUID=<guid value>) in all requests being sent back, as well as the token, and it worked!
I'm not sure what the implementation is in PHP to do this however :)
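For what it's worth, a rough PHP equivalent of that idea (host, port, and credentials are placeholders; the regular expressions and the lack of error handling make this a sketch, not production code) could look like this:

<?php
// Sketch: fetch token.html, capture the GUID cookie, then reuse cookie + token on the list call.
$base = 'http://XXX.XXX.XXX.XXX:8080/gui';

$ch = curl_init($base . '/token.html');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);                  // keep headers so Set-Cookie is visible
curl_setopt($ch, CURLOPT_USERPWD, 'username:password');  // uTorrent web UI credentials
$raw = curl_exec($ch);
curl_close($ch);

preg_match('/Set-Cookie:\s*(GUID=[^;\r\n]+)/i', $raw, $cookie);       // session cookie
preg_match("#<div id='token'[^>]*>([^<]+)</div>#", $raw, $token);     // token from the hidden div

$ch = curl_init($base . '/?list=1&token=' . urlencode($token[1]));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
curl_setopt($ch, CURLOPT_COOKIE, $cookie[1]);            // send GUID=<guid value> back
$list = curl_exec($ch);
curl_close($ch);

echo $list;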
Also, a quick note: I've been trying to get values through jQuery's $.getJSON and $.ajax methods without any success, because the browser I'm using (Chrome) has strict guidelines on cross-domain requests, and it doesn't look like uTorrent implements JSONP.
Hope this helps!
The 400 error code means the server considers your request malformed.
The MIME media type for JSON text is application/json.
Use text/plain or application/json, not text/json.
application/json sometimes causes issues in Chrome, so you might want to stick with text/plain in this case.
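So the header line in the PHP snippet above would become, for example:

header('Content-Type: application/json');   // or 'Content-Type: text/plain' if Chrome misbehaves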
Have you tried changing the order of the query parameters?
e.g. http://localhost:8080/gui/?token=<token_uuid>&list=1
Reference: https://github.com/bittorrent/webui/wiki/TokenSystem#examples
UPDATE
I ran into a similar problem trying to create an XMPP bot for the uTorrent client in Python.
@m.t.bennett was correct: you need to save the session information as well.
When you receive the response from token.html, capture the cookie information as well.
Usually there are 2 params: GUID and sessions. You need to put them in the header for all your subsequent requests -- List API, Getfiles API, etc.
This should fix your problem!
I'm making an AJAX call to my own server from a platform that is set up to prevent these AJAX calls (but I need it to fetch data from my server to display information retrieved from my server's database).
My AJAX script works; it can send the data over to my server's PHP script for processing.
However, it cannot get the processed data back, as it is blocked by the "Access-Control-Allow-Origin" policy.
I have no access to that platform's source/core, so I can't remove the restriction that is blocking me.
(P.S. I used Google Chrome's console and found this error.)
The AJAX code is shown below:
$.ajax({
    type: "GET",
    url: "http://example.com/retrieve.php",
    data: "id=" + id + "&url=" + url,
    dataType: 'json',
    cache: false,
    success: function(data)
    {
        var friend = data[1];
        var blog = data[2];
        $('#user').html("<b>Friends: </b>" + friend + "<b><br> Blogs: </b>" + blog);
    }
});
Or is there a JSON equivalent to the AJAX script above? I think JSON is allowed.
I hope someone can help me out.
Put this on top of retrieve.php:
header('Access-Control-Allow-Origin: *');
Note that this effectively disables CORS protection, and leaves your users exposed to attack. If you're not completely certain that you need to allow all origins, you should lock this down to a more specific origin:
header('Access-Control-Allow-Origin: https://www.example.com');
Please refer to the following Stack Overflow answer for a better understanding of Access-Control-Allow-Origin:
https://stackoverflow.com/a/10636765/413670
Warning, Chrome (and other browsers) will complain that multiple ACAO headers are set if you follow some of the other answers.
The error will be something like XMLHttpRequest cannot load ____. The 'Access-Control-Allow-Origin' header contains multiple values '____, ____, ____', but only one is allowed. Origin '____' is therefore not allowed access.
Try this:
$http_origin = $_SERVER['HTTP_ORIGIN'];
$allowed_domains = array(
    'http://domain1.com',
    'http://domain2.com',
);
if (in_array($http_origin, $allowed_domains))
{
    header("Access-Control-Allow-Origin: $http_origin");
}
I fixed this problem when calling an MVC3 controller.
I added:
Response.AddHeader("Access-Control-Allow-Origin", "*");
before my
return Json(model, JsonRequestBehavior.AllowGet);
Also, my $.ajax call was complaining that it does not accept the Content-type header, so I commented that out, as I know it's JSON being passed to the action.
Hope that helps.
It's a really bad idea to use *, which leaves you wide open to requests from any origin. You basically want your own domain all of the time, scoped to your current SSL settings, and optionally additional domains. You also want them all to be sent as one header. The following will always authorize your own domain in the same SSL scope as the current page, and can optionally also include any number of additional domains. It will send them all as one header, and overwrite any previous one(s) if something else already sent them, to avoid any chance of the browser grumbling about multiple access control headers being sent.
class CorsAccessControl
{
    private $allowed = array();

    /**
     * Always adds your own domain with the current SSL settings.
     */
    public function __construct()
    {
        // Add your own domain, with respect to the current SSL settings.
        $this->allowed[] = 'http'
            . ( ( array_key_exists( 'HTTPS', $_SERVER )
                && $_SERVER['HTTPS']
                && strtolower( $_SERVER['HTTPS'] ) !== 'off' )
                ? 's'
                : null )
            . '://' . $_SERVER['HTTP_HOST'];
    }

    /**
     * Optionally add additional domains. Each is only added one time.
     */
    public function add($domain)
    {
        if ( !in_array( $domain, $this->allowed ) )
        {
            $this->allowed[] = $domain;
        }
    }

    /**
     * Send 'em all as one header so no browsers grumble about it.
     */
    public function send()
    {
        $domains = implode( ', ', $this->allowed );
        header( 'Access-Control-Allow-Origin: ' . $domains, true ); // We want to send them all as one shot, so replace should be true here.
    }
}
Usage:
$cors = new CorsAccessControl();
// If you are only authorizing your own domain:
$cors->send();
// If you are authorizing multiple domains:
foreach ($domains as $domain)
{
    $cors->add($domain);
}
$cors->send();
You get the idea.
Have you tried actually adding the Access-Control-Allow-Origin header to the response sent from your server? Like, Access-Control-Allow-Origin: *?