I have an AJAX function that makes a call to a page on my website.
$(document).on('click', thisIdentity, function() {
    var trigger = $(this);
    var items = trigger.attr('data-values').split('_');
    $.ajax({
        type: "POST",
        url: "/mod/mypage.php",
        data: { pid: items[0], uid: items[1] },
        dataType: "json",
        success: function(data) {
            if (data.job == 1) {
                // do something
            }
        }
    });
});
Now this works fine and does as intended. However, if I use a third-party app like Postman and make a POST request to www.xyz.com/mod/mypage.php with parameters pid: 1 and uid: 2, it still goes through and makes changes to my database.
Is there any way to check that the request is generated from my domain/server only?
How do I stop such POST requests coming from outside my domain?
One thing I thought of was to generate a token, set it in the SESSION before this request, and check in mypage.php whether the token is set. Is this a feasible approach?
This is exactly what a CSRF token is for. Users must navigate to the page first, which generates a token to submit; without navigating to the page, any POST request will be invalid.
However, trying to stop someone from POSTing a request to your endpoint from a utility like Postman is an exercise in futility. You must authenticate every request to the endpoint; in this case, just check that the photo ID is owned by the submitting client.
OWASP provides a decent description of what a CSRF attack is:
Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they're currently authenticated. CSRF attacks specifically target state-changing requests, not theft of data, since the attacker has no way to see the response to the forged request.
Example validation flow
Login.php
<?php
session_start();

// Establish DB connection, validate credentials
$_SESSION['id'] = $db->getUserId();
$_SESSION['admin'] = $db->getAdminStatus();
Delete.php
<?php
session_start();

// Only the photo's owner (based on the logged-in session user) may proceed
if (!$db->isPhotoOwner($_POST['pid'])) {
    exit;
}
// Delete photo flow
Admin.php
<?php
session_start();

if (!$_SESSION['admin']) {
    die("Not admin.");
}
// Do admin action or whatever
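The token flow described at the top can be sketched in the same style (a minimal sketch only, assuming PHP 7+ for random_bytes(); file and field names such as form.php and csrf_token are illustrative, not from the original code). The page that renders the clickable element issues the token, the AJAX call sends it back alongside pid and uid, and mypage.php rejects anything that does not carry it:
form.php
<?php
session_start();

// Issue a per-session token the first time the page is served
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
?>
<!-- The AJAX handler reads data-token and posts it along with pid/uid -->
<div class="delete-photo" data-values="10_20"
     data-token="<?php echo $_SESSION['csrf_token']; ?>"></div>
mypage.php
<?php
session_start();

// Refuse any POST that does not echo the session's token back
if (empty($_POST['token'])
    || empty($_SESSION['csrf_token'])
    || !hash_equals($_SESSION['csrf_token'], $_POST['token'])) {
    http_response_code(403);
    exit(json_encode(array('job' => 0)));
}

// Token is valid: continue with the pid/uid handling and the ownership check above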
You could have the calling page identify itself with $_SERVER['SCRIPT_NAME'], write that value to a hidden input field (or into the POST data), and check for it at the beginning of processing. Any agreed-upon value you choose would work.
If attackers are already imitating your JSON data, you could instead have PHP write the value into the JavaScript dynamically when the page is served.
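A rough sketch of that idea (the field name, the allowed paths, and how the value travels are all placeholders; like any client-supplied value, it only complements the session token and ownership checks above):
calling_page.php
<!-- The calling page writes its own identity into the markup at serve time -->
<input type="hidden" id="origin_page"
       value="<?php echo htmlspecialchars($_SERVER['SCRIPT_NAME']); ?>">
mod/mypage.php
<?php
// Accept the request only if the submitted value names a page we actually serve
$allowed = array('/shop/index.php', '/shop/gallery.php'); // placeholder paths
if (!isset($_POST['origin']) || !in_array($_POST['origin'], $allowed, true)) {
    exit;
}
The JavaScript would then read #origin_page and send it as origin in the AJAX data.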
Related
I implemented reCAPTCHA. After the "I am not a robot" checkbox is clicked, a token is generated by Google.
Client side (js)
function checkCaptchaAndSubscribe(thisContext)
{
    // $captchaRequired, $statusContainer and $status are jQuery objects cached elsewhere
    var captchaResponse = grecaptcha.getResponse();
    if (captchaResponse == "") {
        $captchaRequired.css('display', 'block');
        return false;
    }
    grecaptcha.reset();
    $captchaRequired.css('display', 'none');

    jQuery.ajax({
        url: "/black_newsletter2go/index/verify",
        method: "POST",
        async: true,
        data: {
            recaptchaResponse: captchaResponse
        },
        success: function(response) {
            $statusContainer.show();
            if (response != "success") {
                // "Captcha validation failed!"
                $status.html("<h2 class='nl2go_h2'>Die Captcha Validierung ist fehlgeschlagen!</h2>");
                return false;
            }
            subscribe(thisContext);
        }
    });
}
I send the token to my server by using ajax and validate it there like this:
Server side (php):
public function verifyAction()
{
    $captchaResponse = $this->getRequest()->getParam('recaptchaResponse');
    if (!isset($captchaResponse) || empty($captchaResponse)) {
        return "captcha response is empty";
    }

    $secretKey = Mage::Helper("recaptcha")->getSecretKey();
    $url = 'https://www.google.com/recaptcha/api/siteverify';
    $data = array(
        'secret'   => $secretKey,
        'response' => $captchaResponse,
    );

    // use key 'http' even if you send the request to https://...
    $options = array(
        'http' => array(
            'header'  => "Content-type: application/x-www-form-urlencoded\r\n",
            'method'  => 'POST',
            'content' => http_build_query($data)
        )
    );
    $context = stream_context_create($options);
    $result = file_get_contents($url, false, $context);
    $result = json_decode($result);
    //var_dump($result);
    //exit();

    if ($result->success
        && (strpos(Mage::getBaseUrl(), $result->hostname) !== false)) {
        echo "success";
    } else {
        echo "fail";
    }
}
This is the output of the object $result
It returns success if the checks were successful, otherwise fail.
But is this enough? What if the attacker uses an HTTP proxy like Burp Suite to change the response to success? Then he could bypass my checks and always get through. Or am I wrong?
It uses a key pair to encrypt/decrypt the info, so the info is sent encrypted. That's why it can't be tampered with, but of course it also means you must make sure the private key doesn't get stolen.
The server knows and saves the state in its own storage, so if the client tries to claim "success" when the result was "fail", the server will know, no matter what. So a hacker changing the value on the client is not likely to achieve much; it will depend on your code, of course. If you are using that reCAPTCHA to log the user in, then the login attempt will obviously fail on the server side if the reCAPTCHA returned "fail". So whether the client is told "success" or not, it still won't be logged in. The client should never be the keeper of such state, since it can't be trusted (it can always carry tainted data).
It works in a way similar to what you'd do between a browser and a server using HTTPS.
The communication between your client and your server should also be over HTTPS to avoid the easier man-in-the-middle (MITM) problems. However, it is always possible for someone to become a proxy, which is how most MITM attacks work, and in that case whatever you're doing can be changed by the MITM.
The one thing the MITM can't do, however, is create a valid certificate for the final destination. In that sense there is some protection, but many people don't verify certificates each time they connect to a website. One technique MITMs have used is to strip HTTPS: only the attacker and your server use HTTPS, while the client stays on plain HTTP. Your code could detect that, but obviously the MITM can also change that code. Similarly, setting a cookie with HttpOnly and Secure can enhance security, but that too can be intercepted by a MITM.
Since the MITM can completely change your scripts, there is pretty much nothing you can do on the client side to detect such a problem, and on the server side you will receive hits that look exactly like what the client sent you. So again, there is no real way to detect a MITM.
There is a post asking that very question: can I detect a MITM from the server side? It's not impossible, but it's rather tricky. There are solutions being put in place by new implementations/extensions of the standard HTTP stack, but those require an additional application that connects to a different system, and there is no reason such a channel could not also be proxied by a MITM once enough people use those solutions.
The result comes from a URL that is owned by Google.
If the user tampers with what is being sent to your PHP script, the Google web service will return a failure and you won't let such a request through.
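Put differently: as long as the state change happens in the same server-side request that calls siteverify, a response the attacker rewrites in a proxy changes nothing, because the server never asks the client what the verdict was. A stripped-down sketch of that shape in plain PHP (not the Magento controller above; $secretKey and subscribeToNewsletter() are placeholders):
<?php
// subscribe.php - illustrative only
$secretKey = 'YOUR_SECRET_KEY'; // placeholder

$context = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => "Content-type: application/x-www-form-urlencoded\r\n",
    'content' => http_build_query(array(
        'secret'   => $secretKey,
        'response' => isset($_POST['recaptchaResponse']) ? $_POST['recaptchaResponse'] : '',
    )),
)));
$verdict = json_decode(file_get_contents(
    'https://www.google.com/recaptcha/api/siteverify', false, $context));

if (empty($verdict->success)) {
    http_response_code(400);
    exit('fail');                        // nothing was changed on the server
}

// Only now, in the same request, does the state change happen
subscribeToNewsletter($_POST['email']);  // hypothetical helper
echo 'success';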
Problem
I have a page that is generated entirely through JavaScript. I grab the content by requesting data from a PHP script on a subdomain (ajx.example.com), which returns it in JSON format.
One of the requirements for this particular page is that it be "editable" if a user is logged in (this is one of the keys in the JSON, "isEditable": true). If I visit the request page on the subdomain directly while the user is logged in on the main domain, isEditable is always true. However, if I request it via an Ajax request, it's always false.
These subdomains are done through a VirtualHost on MAMP, and all point to the same directory.
www.example.com is in htdocs/example,
ajx.example.com is in htdocs/example/ajax, and
v1.examplecdn.com is in htdocs/example/cdn.
Code
Here is the init page (www.example.com/app/init.php):
ini_set("session.cookie_domain", ".example.com"); // make sure all sessions are available on all subdomains
error_reporting(E_ALL);
session_start();
// I include the user class here
Here is the request page (ajx.example.com/request.php):
require_once "../app/init.php"; // (/htdocs/example/app/init.php)

header("Content-type: application/json;charset=utf-8", false);
header("Access-Control-Allow-Origin: http://www.example.com", false);

$user = new User();
$editable = false;
if ($user->loggedIn()) { // check if the user is logged in (this is stored in a session on .example.com)
    $editable = true;
}

die(json_encode(array("isEditable" => $editable)));
And here is the Ajax request (v1.examplecdn.com/request.js):
var container = document.getElementById("container");
ajax({
url: "//ajx.example.com/request.php", // (/htdocs/example/ajax/request.php)
dataType: "json",
success: function(res){
if(res.isEditable){
console.log("editable"); // this doesn't come through as isEditable is false.
}
}
});
Request
If anyone can point me in the direction of how to make those PHP sessions accessible across those subdomains, it would be greatly appreciated!
Cheers.
You are right up until this setting -
ini_set("session.cookie_domain", ".example.com"); // make sure all sessions are available on all subdomains
But as you have different domains on different VMs, they cannot share the session, as each VM will create its own copy of a new session. To allow session sharing you need to save sessions either in a DB or in a cache service like memcached or Redis.
Saving sessions in a DB table is explained here:
How do I save session data to a database instead of in the file system?
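For completeness, a minimal sketch of the database variant (assumes PHP 5.4+ for SessionHandlerInterface and a MySQL table sessions(id, data, updated_at); credentials and names are placeholders, and the linked answer covers a fuller version). Register the handler before session_start() in init.php so every host that includes it reads and writes the same table:
<?php
// shared_sessions.php - include from init.php before session_start()
class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;

    public function __construct(PDO $pdo) { $this->pdo = $pdo; }

    public function open($savePath, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';
    }

    public function write($id, $data)
    {
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())');
        return $stmt->execute(array($id, $data));
    }

    public function destroy($id)
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')
                         ->execute(array($id));
    }

    public function gc($maxlifetime)
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE updated_at < ?')
                         ->execute(array(date('Y-m-d H:i:s', time() - $maxlifetime)));
    }
}

$pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass'); // placeholder credentials
session_set_save_handler(new DbSessionHandler($pdo), true);
ini_set("session.cookie_domain", ".example.com");
session_start();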
Before you mark this as a duplicate, please read through, as I have gone through lots of Stack Overflow questions but could not find a suitable solution.
So here's the problem I'm facing: I'm new to Django and learnt about CSRF protection for POST requests. I have successfully implemented these calls on a non-Ajax page. But the current project I'm working on is a one-page application, so all the calls go through Ajax, in vanilla JS, without using any library. The problem is that for the first request I get the valid CSRF token that I generated in the template, but after the first Ajax call the CSRF token changes. So I want to know the right approach in Django for a situation like this. Should I make every request respond with a CSRF token somehow and save it in a JS variable?
Also, currently there are two pages. The first is a simple login template with no Ajax calls; it posts to the home page with credentials and, if they're valid, it's done. But inside the home page there are multiple forms, and submitting any one of them changes the token, so how do I handle a situation like this?
PS: I prefer the code in pure JS, not jQuery or any other framework, and I would not want to disable CSRF protection.
I already thought of storing the CSRF token in a cookie or session variable, but that would defeat the whole purpose of the token.
Please attach some sample code I can learn from, if you can.
The problem is that for the first request I get the valid CSRF token that I generated in the template. But after the first Ajax call the CSRF token changes.
I don't believe this is true. Otherwise, any Django application would stop working if the user opened more than one tab.
Here's the solution I use on my webapps. It works as expected, but I didn't follow the official recommendation of getting the token value from the cookie. Why? Less code.
myapp/templatetags/csrf_ajax.py
from django import template

register = template.Library()

@register.inclusion_tag('myapp/_csrf_ajax.html')
def csrf_ajax():
    # https://docs.djangoproject.com/en/1.8/ref/csrf/#ajax
    return {}
_csrf_ajax.html
<script>
(function () {
    var csrf_token = "{{ csrf_token }}";

    function csrfSafeMethod(method) {
        return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
    }

    $.ajaxSetup({
        beforeSend: function(xhr, settings) {
            if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
                xhr.setRequestHeader("X-CSRFToken", csrf_token);
            }
        }
    });
})();
</script>
Then I use this template tag on each page where I want the CSRF Ajax setup.
{% load csrf_ajax %}
{% csrf_ajax %}
The csrftoken is stored in a cookie and doesn't change with every Ajax request. I have this code in a JS file and add it to any page that needs to send Ajax POST requests (it uses jQuery and the jQuery cookie plugin; you'll need to translate it to plain JavaScript if you don't want to use external libraries):
var csrftoken = $.cookie('csrftoken');

function csrfSafeMethod(method) {
    return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
}

$.ajaxSetup({
    beforeSend: function(xhr, settings) {
        if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
            xhr.setRequestHeader("X-CSRFToken", csrftoken);
        }
    }
});
I have a little shop basket where I can add products.
Here is my index.php
...<script type="text/javascript" src="function.js"></script>
<a title="Add to basket" onclick="add_product_to_cart('apple','1');" href="#">Apple</a><br>...
Here is the functions.js
function add_product_to_cart(item, id) {
var item = item;
var id = id;
$.ajax({
url: "ajax.php",
type: "POST",
data: {
action: "add",
name: item,
id: id
},
success: function(data) {
//do something
}
});
};
When I click on "Apple", the parameters are send to the ajax, and they are visible for example in the Firefox-Web-Developer.
Is there a way to hide these POST parameters, maybe to protect against attacks from outside? Or is my whole approach to adding items to the basket wrong? Thanks for any help!
Here is a screenshot from my web developer tools.
You can't hide what is being logged in the Network tab of Chrome Developer Tools. Even if you could, a hacker could sniff the requests using Fiddler or another web proxy. Client-side validation is nice, but it's not the be-all and end-all. Most people won't try to send illegitimate requests to your server, but some will, I suppose.
You really should be doing server-side validation to confirm that the data sent to the server is valid. Don't rely on the client to do this, as anyone can modify what is sent directly to the server. In your PHP code, you would do something like this:
function validate_data($data)
{
// other code here
if(!is_discontinued($data['product_id']))
add_to_cart($data['product_id']);
// other code after
}
function is_discontinued($product_id)
{
// do database query
$is_discontinued = lookup_product($product_id);
return $is_discontinued;
}
This is very barebones, but it should give you the idea of what needs to be done.
EDIT:
After looking at some of your recent comments, you may also like to include CSRF tokens to make sure that requests originate from your domain. These tokens are generated on the server and often stored in hidden form fields so they are sent back to the server with each request. You then validate the token on the server, and only after it passes validation do you perform your action.
Note, this will only slow down most hackers, but it can deter some who aren't dead set on performing illegitimate requests.
In terms of sending the value with AJAX requests, you would need to select your hidden field and add its value to the POST data. Your AJAX request would then look something like this:
function add_product_to_cart(item, id) {
var item = item;
var id = id;
$.ajax({
url: "ajax.php",
type: "POST",
data: {
action: "add",
name: item,
id: id,
token: $('#csrf_token').val()
},
success: function(data) {
//do something
}
});
};
On the server (PHP), you would have something like this:
function get_csrf_token()
{
    $token = md5(uniqid(rand(), TRUE));
    if (!isset($_SESSION['token'])) {
        $_SESSION['token'] = $token;
    } else {
        $token = $_SESSION['token'];
    }
    return $token;
}

function valid_csrf_token()
{
    if (isset($_POST['token'])) {
        if ($_POST['token'] == $_SESSION['token'])
            return true;
        else
            return false;
    } else {
        return false; // no token was sent with the request
    }
}
Then in your form, you would have your hidden field like this:
<input id="csrf_token" type="hidden" value="<?php get_csrf_token(); ?>" />
Finally, your original PHP validation function would include the CSRF token validation:
function validate_data($data)
{
    // other code here
    if (!is_discontinued($data['product_id']) && valid_csrf_token()) {
        add_to_cart($data['product_id']);
    } else {
        header('HTTP/1.1 400 Bad Request', true, 400); // set status to bad request
    }
    // other code after
}
Note, setting the status to bad request is optional, but it will show the request was not as expected.
The answer is: no, there is no way to hide the data you're sending to the server using AJAX.
But this shouldn't be a problem, since you MUST validate everything on the server.
You can validate things on the client side (for normal users) to get a faster response in the browser and less traffic on your server. But, as said above, you must revalidate everything on the server, because that is the only way to ensure that, even if malicious data is sent to your server, your website will still work as expected.
By the way, you can even block/ban users who try to do something different from what your unmodified client code normally does.
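If you want to act on repeated bad input, one very rough sketch (thresholds, storage and the 403 response are arbitrary choices, and it assumes session_start() has already been called) is to count failed server-side validations per session and stop serving that client:
<?php
// Call this wherever server-side validation fails (sketch only)
function register_failed_validation()
{
    if (!isset($_SESSION['failed_validations'])) {
        $_SESSION['failed_validations'] = 0;
    }
    $_SESSION['failed_validations']++;

    if ($_SESSION['failed_validations'] > 5) { // arbitrary threshold
        // Optionally persist $_SERVER['REMOTE_ADDR'] to a blocklist table here
        http_response_code(403);
        exit('Too many invalid requests.');
    }
}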
I have a web site that is trying to call an MVC controller action on another web site. These sites are both set up as relying party trusts in AD FS 2.0. Everything authenticates and works fine when opening pages in the browser window between the two sites. However, calling a controller action from JavaScript using the jQuery AJAX method always fails. Here is a code snippet of what I'm trying to do...
$.ajax({
url: "relyingPartySite/Controller/Action",
data: { foobar },
dataType: "json",
type: "POST",
async: false,
cache: false,
success: function (data) {
// do something here
},
error: function (data, status) {
alert(status);
}
});
The issue is that AD FS uses JavaScript to post a hidden html form to the relying party.
When tracing with Fiddler I can see the request reach the AD FS site and return this HTML form, which should post and redirect to the controller action, authenticated. The problem is that this form comes back as the result of the AJAX request, and it is obviously going to fail with a parser error, since the AJAX request expects JSON from the controller action. It seems like this would be a common scenario, so what is the proper way to communicate with AD FS from AJAX and handle this redirection?
You have two options.
More info here.
The first is to share a session cookie between an entry application (one that is HTML based) and your API solutions. You configure both applications to use the same WIF cookie. This only works if both applications are on the same root domain.
See the above post or this stackoverflow question.
The other option is to disable passiveRedirect for AJAX requests (as in Gutek's answer). This will return an HTTP status code of 401, which you can handle in JavaScript.
When you detect the 401, you load a dummy page (or an "Authenticating" dialog, which could double as a login dialog if credentials need to be entered again) in an iframe. When the iframe has finished loading, you attempt the call again. This time the session cookie will be present on the call and it should succeed.
// Requires jQuery 1.9+
var webAPIHtmlPage = "http://webapi.somedomain/preauth.html"

function authenticate() {
    return $.Deferred(function (d) {
        // Potentially could make this into a little popup layer
        // that shows we are authenticating, and allows for re-authentication if needed
        var iFrame = $("<iframe></iframe>");
        iFrame.hide();
        iFrame.appendTo("body");
        iFrame.attr('src', webAPIHtmlPage);
        iFrame.load(function () {
            iFrame.remove();
            d.resolve();
        });
    });
};

function makeCall() {
    // 'uri' is the protected Web API endpoint, defined elsewhere
    return $.getJSON(uri)
        .then(function(data) {
            return $.Deferred(function(d) { d.resolve(data); });
        },
        function(error) {
            if (error.status == 401) {
                // Authenticating;
                // TODO: should add a check to prevent an infinite loop
                return authenticate().then(function() {
                    // Making the call again
                    return makeCall();
                });
            } else {
                return $.Deferred(function(d) {
                    d.reject(error);
                });
            }
        });
}
If you do not want to receive HTML with the link, you can handle AuthorizationFailed on WSFederationAuthenticationModule and set RedirectToIdentityProvider to false for AJAX calls only.
For example:
FederatedAuthentication.WSFederationAuthenticationModule.AuthorizationFailed += (sender, e) =>
{
if (Context.Request.RequestContext.HttpContext.Request.IsAjaxRequest())
{
e.RedirectToIdentityProvider = false;
}
};
This, combined with the Authorize attribute, will return status code 401; if you want something different, you can implement your own Authorize attribute and add special handling for AJAX requests.
In the project I currently work on, we had the same issue with SAML token expiration on the client side causing problems with AJAX calls. In our particular case we needed all requests to be enqueued after the first 401 was encountered, and after successful authentication all of them could be resent. The authentication uses the iframe solution suggested by Adam Mills, but goes a little further for the case where user credentials need to be entered: a dialog tells the user to log in on an external view (since AD FS does not allow displaying its login page in an iframe, at least not in the default configuration), and the pending requests wait until the user has logged in on that external page. The pending requests can also be rejected if the user chooses Cancel, in which case the jQuery error callback is invoked for each request.
Here's a link to a gist with the example code:
https://gist.github.com/kavhad/bb0d8e4a446496a6c05a
Note that my code relies on jQuery to handle all AJAX requests. If your AJAX requests are handled by vanilla JavaScript or by other libraries or frameworks, you can perhaps still find some inspiration in this example. jQuery UI is only used for the dialog and accounts for a small portion of the code, which could easily be swapped out.
Update
Sorry, I changed my GitHub account name and that's why the link did not work. It should work now.
First of all, you say you are trying to make an AJAX call to another website; does your call conform to the browsers' same-origin policy? If it does, and you are expecting HTML as a response from your server, change the dataType of the AJAX call to dataType: "html", then insert the form into your DOM.
Perhaps the first two posts of this series will help you; they cover AD FS and AJAX requests.
What I would try to do is find out why the authentication cookies are not transmitted with the AJAX request, and find a means to send them with it. Alternatively, wrap the AJAX call in a function that pre-authenticates by retrieving the HTML form, appending it hidden to the DOM, and submitting it (which will hopefully set the right cookies), then sends the request you originally wanted to send.
You can only use these dataType values:
"xml": Treat the response as an XML document that can be processed via jQuery.
"html": Treat the response as HTML (plain text); included script tags are evaluated.
"script": Evaluates the response as JavaScript and evaluates it.
"json": Evaluates the response as JSON and sends a JavaScript Object to the success callback.
If you can see in Fiddler that only HTML is returned, change your dataType to "html"; if it is only script code, you can use "script".
You should create a file, say json.php, and put the connection to the relying-party website in there; this should work (a rough sketch of such a file follows the snippet below).
$.ajax({
url: "json.php",
data: { foobar },
dataType: "json",
type: "POST",
async: false,
cache: false,
success: function (data) {
// do something here
},
error: function (data, status) {
alert(status);
}
});
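If you go that route, json.php is essentially a same-origin proxy: the browser talks only to your own site, and json.php forwards the call to the relying-party endpoint server to server and hands the JSON back. A rough sketch (the URL is a placeholder, and note this does not by itself solve the AD FS sign-in; the server-to-server call still has to present whatever credentials or tokens the relying party expects):
<?php
// json.php - same-origin proxy sketch, not a drop-in solution
$ch = curl_init('https://relyingparty.example.com/Controller/Action'); // placeholder URL
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query($_POST), // pass the browser's POST data through
    CURLOPT_RETURNTRANSFER => true,
));
$response = curl_exec($ch);
curl_close($ch);

header('Content-Type: application/json');
echo $response !== false ? $response : json_encode(array('error' => 'upstream request failed'));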