I have some pages that I don't want users to be able to access directly.
I have this function I came up with which works:
function prevent_direct_access()
{
    if ($_SERVER['REQUEST_URI'] == $_SERVER['PHP_SELF'])
    {
        //include_once('404.php');
        header("Location: 404.php");
    }
}
This does exactly what I want: the URL does not change but the content does. However, I am wondering if there is something I need to add to tell search engines that this is a 404 and that they should not index it. Keep in mind, I do not want the URL to change.
Thanks!
Don’t redirect but send the 404 status code:
header($_SERVER['SERVER_PROTOCOL'].' 404 Not Found', true, 404);
exit;
As for the search engines: if you return HTTP status 404, they should not index the page. But you could always redirect to somewhere covered by a robots.txt.
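Combining that with the original function, here is a minimal sketch that keeps the URL unchanged while still telling search engines the page does not exist (assuming a 404.php template exists alongside this script):

function prevent_direct_access()
{
    if ($_SERVER['REQUEST_URI'] == $_SERVER['PHP_SELF'])
    {
        // Send a real 404 status so search engines do not index the page
        header($_SERVER['SERVER_PROTOCOL'] . ' 404 Not Found', true, 404);
        // Render the error page inline instead of redirecting, so the URL stays the same
        include_once('404.php');
        exit;
    }
}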
Just to clarify:
You have some PHP that you want available to other PHP programs on the system
You do not want anybody accessing it except by running one of the other PHP programs
(i.e. "direct" doesn't mean "except by following a link from another page on this site")
Just keep the PHP file outside the webroot. That way it won't have a URL in the first place.
To ensure search engines don't index it, call header() to send a 404, like this:
header("HTTP/1.0 404 Not Found");
Or put all such files in one folder, say includes/, and add Disallow: /includes/ to your robots.txt file. You can also add a .htaccess file in the same directory containing the single line Deny from all; this tells Apache to block access outright (if Apache is configured to honor .htaccess files), for another layer of security.
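For reference, a minimal sketch of both files (the includes/ folder name is just an example):

# robots.txt, in the webroot
User-agent: *
Disallow: /includes/

# includes/.htaccess (Apache 2.2 syntax; on Apache 2.4 use "Require all denied")
Deny from all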
I moved my website, and I have a QR code (which is printed in public and can't easily be replaced) that points to a specific file on my old website that has now been moved. Currently, the URL just points to a "Not found" page on my new website. I tried to use JavaScript in the header to catch the URL and forward it to the right one, as follows:
<script type="text/javascript">
if(window.location.href === "https://www.website.com/multimedia/hoerproben/1.mp3")
{
window.location.href = "https://www.webseite.com/app/download/10079133850/1.mp3";
}
</script>
But it doesn't work. Any hints as to what I am doing wrong?
When you open a URL, the browser makes an HTTP request to your server for that particular resource (in your example, an MP3 file).
JavaScript is not involved at all (actually, there are so-called "service workers", but they are not what you're looking for; they are meant for caching, not redirecting). The browser does not know that your JavaScript code exists and will not execute it.
What you should do instead is redirect on the server, so that when the browser asks for /oldlocation/file.mp3, the server answers with a redirect to /newlocation/file.mp3.
How to do this differs from server to server. If you have no control over how your server works, what you're asking is simply not possible.
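For example, if the old server runs Apache and honors .htaccess files (an assumption), the whole fix could be a single mod_alias line on the old host:

# .htaccess on the old host: permanent redirect for the printed QR URL
Redirect 301 /multimedia/hoerproben/1.mp3 https://www.webseite.com/app/download/10079133850/1.mp3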
It won't work unless you place that code in the "Not found" page that actually gets served. If your URL pointed to an HTML file, you could have just placed a redirect script in it. For media files, you would have to configure your server to serve an HTML file instead. Don't worry about the extension; it's the Content-Type header that determines the type of the file served. Doing this, however, is not good practice, because your server would still be returning a 200 response code.
It's good practice to return 301 Moved Permanently instead, as 101arrowz pointed out in the comments. How that can be accomplished depends on what server you're using.
Here's how it could be done with Express:
app.get('/multimedia/hoerproben/1.mp3', function(req, res) {
    // res.redirect() defaults to 302; pass 301 explicitly for a permanent redirect
    res.redirect(301, '/app/download/10079133850/1.mp3');
});
I'm making a website with some image galleries, and I want to do as little backend work as possible. I have the images separated into folders, and a PHP script that fetches the contents of a directory specified by a GET param, e.g. fetchFiles.php?dir=./art. JavaScript sends a fetch request there, receives a JSON array of image file names, and creates images with those names as their src. I would like PHP to only be able to access things in the directory the script is executing in, so someone can't list all the directories on the server.
fetchFiles.php
<?php
echo json_encode(
array_values(
array_diff(
scandir($_GET['dir'], SCANDIR_SORT_ASCENDING),
array('.', '..', 'fetchFiles.php')
)
)
);
?>
First of all, don't ever give the user the ability to control input that directly affects your code. This conflation of code and user-supplied data is precisely what leads to insecure code.
Instead of letting the user decide what directory your PHP should look in, let PHP decide what directory it should look in.
Instead of:
scandir($_GET['dir'], SCANDIR_SORT_ASCENDING);
Do this:
const PICTURES_DIR = '/path/to/pictures/';
scandir(PICTURES_DIR, SCANDIR_SORT_ASCENDING);
If you must let the user supply some part of the input, the least you can do is use a whitelisting approach, rather than opening up your entire code to all sorts of vulnerabilities.
$whiteList = ['/path1/', '/path2/', ...];
if (in_array($_GET['path'], $whiteList)) {
    // It's OK
} else {
    // Ohnoes :(
}
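Applied to the gallery script, a minimal sketch could look like this (the whitelist entries 'art' and 'photos' are hypothetical folder names; adjust them to your actual galleries):

<?php
// fetchFiles.php: only list directories that are explicitly allowed
$whiteList = ['art', 'photos']; // hypothetical gallery folders next to this script
$dir = isset($_GET['dir']) ? $_GET['dir'] : '';
if (!in_array($dir, $whiteList, true)) {
    header('HTTP/1.1 400 Bad Request');
    exit;
}
echo json_encode(
    array_values(
        array_diff(
            scandir(__DIR__ . '/' . $dir, SCANDIR_SORT_ASCENDING),
            array('.', '..', 'fetchFiles.php')
        )
    )
);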
Now, PHP does have something called an open_basedir restriction, which prevents PHP from reading files above a certain base directory. But really, if you find yourself reaching for it just to be so lazy as to allow arbitrary user input to control your code, you're already setting yourself up for failure.
Security is built in layers; no single measure is a silver bullet.
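For completeness, a one-line open_basedir sketch for php.ini (the path is hypothetical):

; Restrict PHP file access to the web tree
open_basedir = /var/www/example/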
I'm using jQuery to display a certain page to a user through its .load() function. I am doing this to allow users to customize the website to fit their needs.
At the moment, I am trying to display the file feed.php inside a container within main.php.
I have come across a problem: I would like to prevent direct access to the file (i.e. going directly to its path, ./feed.php), while still allowing it to be served through the .load() function.
If I use the .htaccess deny from all method for this, I get a 403 on that specific part of the page. I can't find any other solution to this problem, which leaves me unable to achieve what I want.
This is my current (simplified) script and HTML:
<script type="text/javascript">
$("#dock-left-container").load("feed.php"); // load feed.php into the dock-left-container div
</script>
<div class="dock-leftside" id="dock-left-container"></div> // dock-left-container div
If anyone could suggest a solution through .htaccess, php, or even a completely different way to do this, I'd be very grateful!
Thanks in advance.
Please follow the steps below to achieve this:
In jQuery's .load() call, post a security code.
At the top of feed.php, add a PHP condition: if the posted security_code parameter is present and matches the security_code passed via .load(), allow access to the page; otherwise, deny it.
Apply the changes below to your existing code.
JS
<?php
session_start(); // required before reading or writing $_SESSION
$_SESSION['security_code'] = randomCode(); // randomCode() is a placeholder for your own token generator
?>
<script type="text/javascript">
$("#dock-left-container").load("feed.php", {
security_code: '<?= $_SESSION['security_code']; ?>'
}); // load feed.php into the dock-left-container div
</script>
PHP
Place this PHP condition at the top of feed.php:
session_start(); // required here as well, to read the same session
if (isset($_POST['security_code']) && $_POST['security_code'] == $_SESSION['security_code']) {
    // All of feed.php's content goes here
} else {
    echo "No direct access to this page is allowed.";
}
feed.php:
if (isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
    readfile('myfeed.xml');
} else {
    header('HTTP/1.0 403 Forbidden');
}
jQuery sends an X-Requested-With: XMLHttpRequest header by default (PHP exposes it as $_SERVER['HTTP_X_REQUESTED_WITH']). This is not, by far, anything remotely secure, since HTTP headers are easily sent or spoofed, but it will stop the occasional user trying to access the feed directly.
You can additionally check the $_SERVER['HTTP_REFERER'] header (again, easily spoofed) and, of course, use your normal session logic to make sure the user is logged in, if that's a requirement for accessing the feed.
Either way, there's no way to make this watertight. If your browser can (or should be able to) access the feed in some way, then it's simply a matter of opening the debugger, looking at the actual request in the network tab, and sending the exact same headers/request from, say, curl. In fact, you will see the response to the request (i.e. the actual feed) in the debugger as well.
Repeat after me: if my (or a user's) browser can access the feed "from jQuery" (via an AJAX request or whatever), then the feed is accessible to any user who is even slightly more persistent than giving up immediately. Only a session check will keep out unauthorized users, because it relies on being logged in. After logging in, the request is visible no matter what, and it can be forged and sent from any other application no matter what.
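For completeness, a minimal sketch of that session-based check at the top of feed.php (it assumes your login code sets $_SESSION['user_id'] somewhere; that key name is hypothetical):

<?php
session_start();
// Only serve the feed to logged-in users
if (empty($_SESSION['user_id'])) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}
readfile('myfeed.xml');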
I'm hosting a webpage on GitHub (flickrTest.html), and I'm trying to check for the existence of a folder in the same directory as the webpage. The hosted folder looks like this:
http://imgur.com/a/pZWoH
I tried to use an AJAX call like this:
$.ajax({
    url: 'mapData',
    error: function() {
        // Directory doesn't exist
        console.log("ERROR: expected directory named 'mapData'. Exiting...");
        return;
    },
    success: function() {
        // Directory does exist
        console.log("mapData exists..");
        ...
but I'm getting a mixed-content error, because this call is treated as HTTP while the site hosting my webpage is HTTPS. Oddly, I am able to access mapData's JSON files if I include the absolute path. Is there a way to check for the existence of a folder over HTTPS?
First, there is a fundamental problem with your approach. As others have stated in the comments, you cannot check whether a folder exists (or has contents) with a simple HTTP (or HTTPS) request, since the web server expects to respond with HTML that can be presented in a user's browser. You can create a handler or script that processes a directory request and map that functionality using something like a .htaccess rewrite rule, depending on what platform you are on. The reason I've called this a "problem" rather than impossible is that you can (as you appear to be attempting) parse that response into something usable. That said, it's beside the point and not the cause of the error you are actually getting.
The error you are experiencing comes from loading the current page over HTTPS while making the AJAX request over HTTP (as the error message indicates). The message can be misleading in your case: it's not that the request URL was explicitly marked as non-HTTPS, it's that the browser doesn't trust whether the URL is a request for a file or a folder. You can correct this by simply adding a trailing slash to the folder:
$.ajax({
    url: 'mapData/',
    error: function() {
        // Directory doesn't exist
        console.log("ERROR: expected directory named 'mapData'. Exiting...");
        return;
    },
    success: function() {
        // Directory does exist
        console.log("mapData exists..");
        ...
You have now resolved the issue of completing the web request, but you are faced with the problem mentioned in the first part: the server will return a 404 error, because that is how github.io is configured to respond to empty (or non-existent) directory requests. You will need some type of server-side handler to process this request, or you will need to get creative, such as putting an index.html in that folder for your JavaScript to probe. For instance, drop an index.html into the folder: if the server returns 200, you know the folder exists, but if it returns 404, you can assume it does not (see the sketch below).
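A minimal sketch of that probe (the file name mapData/index.html is an assumption; any file you control inside the folder works):

// Probe for a known file inside the folder; a 200 response means the folder exists
fetch('mapData/index.html')
    .then(function(res) {
        if (res.ok) {
            console.log('mapData exists..');
        } else {
            console.log("ERROR: expected directory named 'mapData' (HTTP " + res.status + ")");
        }
    })
    .catch(function(err) {
        console.log('Request failed entirely:', err);
    });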
In case it's not already known, web servers are designed to limit how much a browser can reverse-engineer their contents. While servers can be configured to return directory listings, by default most will protect folders so that remote users cannot browse the server without some type of elevated permissions or authentication. Essentially, the reason this requires a more customized server-side approach is not that something is wrong with the front-end code; it's that, for security reasons, the web server is designed to disallow this kind of thing unless it is explicitly configured to allow it.
I just removed the # from the URLs of my Angular single-page app.
I did it like this:
$locationProvider.html5Mode(true);
And it worked fine.
My problem is that when I directly enter any URL into the browser, it shows a 404 error, while everything works fine when I traverse the app through links.
Eg: www.example.com/search
www.example.com/search_result
www.example.com/project_detail?pid=19
All these URLs work fine, but when I enter any of them directly into my browser, I get a 404 error.
Any thoughts on this?
Thanks in advance.
Well, I had a similar problem; the server-side implementation included Spring in my case.
Client-side routing ensures that all URL changes are resolved on the client. However, when you directly enter such a URL in the browser, the browser actually goes to the server to retrieve a web page corresponding to that URL.
Now in your case, since these are virtual URLs that are only meaningful on the client side, the server throws a 404.
You can capture the page-not-found exception in your server-side implementation and redirect to the default page [route] of your app.
In Spring, we do have handlers for page-not-found exceptions, so I guess they'll be available for your server-side implementation too.
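For illustration, a minimal Spring MVC sketch of that idea (the class name and mapping are hypothetical; it forwards extension-less paths to the single-page app's entry point instead of returning 404):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class SpaForwardController {
    // Any path without a dot (i.e. not a static asset) is handed back to the SPA
    @RequestMapping("/{path:[^.]*}")
    public String forward() {
        return "forward:/index.html";
    }
}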
When using the History API you are saying:
"Here is a new URL. The other JavaScript I have just run has transformed the page into the page you would have got by visiting that URL."
This requires that you write server-side code that will build the page in that state for the other URLs. This isn't a trivial thing to do and will usually require a significant amount of work.
However, in exchange for that work you get robustness and performance. When one of those URLs is visited it will:
work even if the JS fails for any reason, such as a dropped network connection or a client (like a search engine) that doesn't support JS
load faster than loading the homepage and then transforming it with JS
You need to use rewrite rules. Angular is a single-page app, so all your requests should go to the same file (index.html). You can do this by creating a .htaccess file.
Assuming your main page is index.html, something like this (not tested):
RewriteRule ^(.*)$ / [L,QSA]
The L flag means that if the rule matches, no further RewriteRule directives are executed.
QSA means that the URL query parameters are also passed along with the rewritten URL.
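A slightly fuller, commonly used variant of that .htaccess (also untested here; it assumes Apache with mod_rewrite enabled and index.html as the app entry point) only rewrites requests that don't match a real file or directory:

RewriteEngine On
# Leave real files and directories (JS, CSS, images) untouched
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# Send everything else to the single-page app entry point
RewriteRule ^ index.html [L,QSA]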
More info about htaccess: http://httpd.apache.org/docs/2.2/howto/htaccess.html