I have a site that sends XMLHttpRequests to a PHP file that handles the HTTP POST request and returns data in JSON format. The URLs of these POST handlers are public information (since a user can just view the JS code for a page and find the URLs I'm sending HTTP requests to).
I mainly handle HTTP POST requests in PHP like this:
// First verify it is an XMLHttpRequest, then read the POST data
if (isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) === 'xmlhttprequest')
{
    $request = file_get_contents('php://input');
    $data = json_decode($request);
    // Do stuff with the data
}
Unfortunately, I'm fairly sure that the headers can be spoofed, and some devious user or click bot can just spam my POST handlers, repeatedly querying my database until either my site goes down or they go down fighting.
I'm not sure whether their requests would play a huge role in freezing the server (20 requests per second isn't that much), but should I be doing something about this, especially in the case of a DDoS attack? I've heard of rate limiting, where you record an instance of every request an IP makes and then check whether its behaviour is spammy in nature:
INSERT INTO logs (ip_address, page, date) VALUES ('$ip', '$page', NOW())
// And then, every time someone hits the PHP POST handler, check whether the same IP loaded it within the past second or 10 seconds
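A rough sketch of that log-and-check approach, using PDO prepared statements (the $pdo connection, table layout, and threshold are illustrative):

// Assumes an existing PDO connection $pdo and a logs table as sketched above.
$ip   = $_SERVER['REMOTE_ADDR'];
$page = 'post_handler.php';

// Log this hit.
$stmt = $pdo->prepare('INSERT INTO logs (ip_address, page, date) VALUES (?, ?, NOW())');
$stmt->execute([$ip, $page]);

// Count hits from this IP on this page within the last 10 seconds.
$stmt = $pdo->prepare('SELECT COUNT(*) FROM logs WHERE ip_address = ? AND page = ? AND date > NOW() - INTERVAL 10 SECOND');
$stmt->execute([$ip, $page]);

if ($stmt->fetchColumn() > 20) {   // illustrative threshold
    http_response_code(429);       // Too Many Requests
    exit;
}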
But that means every time there's a request by a normal user, I have to expend resources to log it. Is there a standard or better "practice" (maybe some server configuration?) for preventing or dealing with this concern?
Edit: Just for clarification, I'm referring to someone writing a piece of software (with a cookie, or logged in) that sends millions of requests per second to all of the PHP POST handlers on my site.
The solution for this is to rate-limit requests, usually per client IP.
Most webservers have modules which can do this, so use one of them; that way your application only receives requests it's supposed to handle. A sketch for nginx follows the list below.
nginx: ngx_http_limit_req
Apache: mod_evasive
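For example, a minimal nginx sketch using ngx_http_limit_req (the zone name, rate, burst, and location are illustrative and should be tuned to your traffic):

# Track clients by IP and allow roughly 20 requests per second, with a small burst.
limit_req_zone $binary_remote_addr zone=post_api:10m rate=20r/s;

server {
    location /ajax/ {
        limit_req zone=post_api burst=10 nodelay;
        # ... pass the request on to the PHP handler as usual ...
    }
}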
There are many things you can do:
Use tokens to authenticate requests. Save the token in the session and allow only a limited number of requests per token (e.g. 20). Also make tokens expire after some amount of time (e.g. 5 minutes); the exact values depend on your site's usage patterns. This of course will not stop an attacker, as they can refresh the site and grab a new token, but it is a small and almost costless aggravation (see the sketch after this list).
Once you have tokens, require a captcha after several token refreshes. Again, adjust this to your usage patterns to avoid showing the captcha to regular users.
Adjust your server's firewall rules. Use the iptables connlimit and recent modules (see http://ipset.netfilter.org/iptables-extensions.man.html). This reduces the request rate your HTTP server has to handle, so it becomes harder to exhaust its resources.
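A rough sketch of the per-token counter from the first point, assuming PHP sessions (the limits and key names are illustrative):

// When rendering the page: issue a token, or refresh an expired one.
session_start();
if (empty($_SESSION['api_token']) || $_SESSION['token_expires'] < time()) {
    $_SESSION['api_token']      = bin2hex(random_bytes(16));
    $_SESSION['token_requests'] = 0;
    $_SESSION['token_expires']  = time() + 300;   // 5 minutes
}

// In the POST handler: check the token and count its uses.
$sent = $_POST['token'] ?? '';
if (!hash_equals($_SESSION['api_token'], $sent)
    || $_SESSION['token_expires'] < time()
    || ++$_SESSION['token_requests'] > 20) {
    http_response_code(429);   // Too Many Requests
    exit;
}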
I have created an API which takes a hostkey (API_KEY), validates it, and gives back a JWT token. Everything is working fine; I can't access the restricted routes without the hostkey.
ISSUE
The major issue is what happens if someone gives this hostkey to others: it will no longer be protected and can be misused. So what I want to do is validate not only the hostkey but also the domain the request came from. It is a paid service and I really want to restrict it to specific domains, just like Google does with the Maps API: if you add a Maps key to another domain, it throws an error.
The only way to do this is to check the origin and referrer headers.
Unfortunately, server to server this can't be done reliably, as the referrer and origin headers would be set by the coder and so can be spoofed easily. For server to server calls you would be better off whitelisting the IP addresses that are allowed to make calls to your APIs. In this case use something like How to get Real IP from Visitor? to get the real IP of the server and verify it against the whitelisted IPs.
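A minimal sketch of that allow-list check, assuming the API is reached directly so REMOTE_ADDR is the caller's real address (the listed IPs are illustrative):

$allowed_ips = ['203.0.113.10', '198.51.100.7'];   // replace with your whitelisted servers

if (!in_array($_SERVER['REMOTE_ADDR'], $allowed_ips, true)) {
    header('HTTP/1.0 403 Forbidden');
    die('You are not allowed to access this.');
}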
Assuming this is a JS call in the browser and not server to server, and that you trust the browser, the only way this can really be done is by verifying the referrer and origin headers. This can still be spoofed with a browser plugin or even with a tool like Postman, so I don't recommend it for high security. Here is a PHP example for verifying the origin or referrer.
$origin_url = $_SERVER['HTTP_ORIGIN'] ?? $_SERVER['HTTP_REFERER'];
$allowed_origins = ['example.com', 'gagh.biz']; // replace with query for domains.
$request_host = parse_url($origin_url, PHP_URL_HOST);
$host_domain = implode('.', array_slice(explode('.', $request_host), -2));

if (! in_array($host_domain, $allowed_origins, false)) {
    header('HTTP/1.0 403 Forbidden');
    die('You are not allowed to access this.');
}
Optionally, CORS headers are also worth adding, as commented by @ADyson: Cross-Origin Request Headers (CORS) with PHP headers.
I would like to suggest setting a quota or limit on the number of requests, so that when the paid API key reaches, say, 100 requests it stops working; then the person who paid will not give the key to others. This is not a perfect solution, but I would suggest it because most API services use it.
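A loose sketch of such a per-key quota check, assuming a PDO connection $pdo and an api_keys table with request_count and request_limit columns (all of these names are illustrative):

// $api_key holds the key sent by the client.
$stmt = $pdo->prepare('SELECT request_count, request_limit FROM api_keys WHERE api_key = ?');
$stmt->execute([$api_key]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if (!$row || $row['request_count'] >= $row['request_limit']) {
    header('HTTP/1.0 403 Forbidden');
    die('Quota exceeded or invalid key.');
}

// Count this request against the key's quota.
$pdo->prepare('UPDATE api_keys SET request_count = request_count + 1 WHERE api_key = ?')
    ->execute([$api_key]);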
There is a system that sends POST requests from the frontend to the backend. These POST requests do not use the body to pass the data to the server; instead, they use query strings in the URL.
These requests do not send files or JSON, only several string params.
W3C does not describe that situation: https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Is it bad practice to use query strings for POST requests, and are there any negative consequences of doing so for security, performance, or architectural reasons?
Are there any conventions that define the usage of body or query strings for different types of requests?
Reminder: In 2014, RFC2616 was replaced by multiple RFCs (7230-7237).
Is using a query string in a POST request bad practice?
Not if you know what you are doing.
Mechanically, it is all fine: are we allowed to use POST with a target-uri that includes a query-part? Yes. Are we allowed to use POST with an empty request body? Yes. Are we allowed to do both of those things at the same time? Yes.
The hard part: will this POST request invalidate the correct representations from the cache?
Cache-invalidation happens when the server returns a non-error response to an unsafe request (POST is an unsafe request method). The representations that are invalidated are those that match the target-uri of the unsafe request.
GET /foo?a=b HTTP/2.0
POST /foo?a=b HTTP/2.0
Here, if the POST is successful, the representations cached after the successful GET request will be invalidated in the cache.
GET /foo HTTP/2.0
POST /foo?a=b HTTP/2.0
Here, the effective request-uri is not the same, which means that general purpose components won't invalidate the cached representations of /foo.
There's nothing wrong with using query parameters in a URL in a POST request, with or without a request body. If it makes semantic sense for your request, it's fine. The POST method in itself has a semantic meaning distinct from GET; it doesn't require a request body to be useful, and the URL is again distinct from both. A classic example might be:
POST /foo/bar?token=83q2fn2093c8jm203
I.e., passing some sort of token through the URL.
There's no general security problem here, since anyone who could intercept this POST request to read the URL could also read its body data; you'll hardly find an attacker in a position that allows them to read the URL but not the body. However, URLs are typically logged in server access logs and browser histories, while request bodies aren't; that may or may not be worth considering, depending on what information you're transporting in those parameters and who has access to those logs.
I'm trying to work out my mobile data usage and I noticed there are simple APIs to query at https://secure.example.com/myaccountmgr/fapi/usage/data/... but they require an Authorization: 0a60bd4e0blahlblahhash header in order to query them.
I copied the curl commands used when logging in, but the login does a fairly complex redirection dance that I'm not sure how to reproduce with curl, since it's driven by JavaScript.
How do people do this? After further debugging I noticed that it later makes an AJAX request via Angular to https://secure.example.com/myaccountmgr/fapi/login/esso, which returns a utoken that is used in the subsequent AJAX requests as the Authorization value. So what I am asking is: how do I view just this response once I log in, so I can grab the token?
On my site I have an auto-suggest text input that suggests results as the user types. The results are provided by AJAX calls to an API on a different domain. This means I have to use CORS to allow the requests.
It is all working quite well, but every time the user types a new character, the browser sends a new OPTIONS request to ensure it is authorized.
Is there a way around all these repeated options requests?
My PHP script receiving the requests has
header("Access-Control-Allow-Origin: http://consent.example.com");
and the requests are all originating from consent.example.com. To be clear, the authorization works just fine, and the request completes successfully, but I don't know why it needs to keep making OPTIONS calls. It would make sense to me that the browser would cache this.
According to RFC 2616 ("Hypertext Transfer Protocol -- HTTP/1.1"), section 9.2:
9.2 OPTIONS
...
Responses to this method are not cacheable.
The HTTP spec explicitly disallows caching OPTIONS responses.
It is worth noting that the GET responses do not employ caching either (I see that customers?search=alex is 200 each time). This is simply because the server chooses not to send 304 responses for that request, or because your browser doesn't let the server know it has a cached copy via an If-Modified-Since or If-None-Match request header.
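As a rough sketch of what conditional GET support could look like on the PHP side (the endpoint payload and validator are illustrative):

// Build the response and a validator for it.
$results = ['alex', 'alexandra'];       // illustrative payload for customers?search=alex
$body = json_encode($results);
$etag = '"' . md5($body) . '"';

header('ETag: ' . $etag);
header('Content-Type: application/json');

// If the browser already holds this exact copy, answer 304 with no body.
if (isset($_SERVER['HTTP_IF_NONE_MATCH']) && trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    http_response_code(304);
    exit;
}

echo $body;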
Imagine the situation: I have an ajax.php file that returns specific information based on an AJAX request.
How can I block all requests to the ajax.php file except those coming via AJAX?
I'm looking for something like this in PHP:
if ($ajax) {
    // Do something
}
Will this guarantee that a malicious user won't be able to see what ajax.php has to display? Since AJAX is subject to the same-origin policy, the request must originate from the same domain, so in theory nobody else will be able to call my ajax.php?
There is no way to reliably tell whether a request is an Ajax request or not, ever. Any client-side information (like the Referer header) can be spoofed, and you cannot trust any of it.
You secure Ajax requests like any other request - usually through a session-based login system that checks whether the requesting client is logged in, and what they are allowed to see.
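For example, a minimal session-based check at the top of ajax.php might look like this (the session key and login flow are illustrative):

session_start();

// Reject anyone who has not authenticated through the normal login flow.
if (empty($_SESSION['user_id'])) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}

// From here on, tailor the response to what this user is allowed to see.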
Other answers already mentioned it: there's no reliable way to determine whether a script was called via an AJAX request. But I use this code to detect an AJAX request:
define('IS_AJAX', isset($_SERVER['HTTP_X_REQUESTED_WITH']) && $_SERVER['HTTP_X_REQUESTED_WITH'] === 'XMLHttpRequest');
Keep in mind that it can be spoofed, so don't depend on it.
What I am doing to secure our AJAX requests: whenever a user logs in, generate a token for that user (e.g. take the microtime and convert it into some hash), then attach this token to that user.
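A loose sketch of that idea, with one change: the token comes from random_bytes() rather than a hashed microtime, since it is harder to guess (the session keys and parameter name are illustrative):

// At login: mint a token and attach it to the user's session.
session_start();
$_SESSION['ajax_token'] = bin2hex(random_bytes(32));

// In each AJAX handler: require the token on every request.
$sent = $_POST['token'] ?? '';
if (empty($_SESSION['ajax_token']) || !hash_equals($_SESSION['ajax_token'], $sent)) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}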