Reading directory contents from a client's computer - PHP - JavaScript

I want to recursively read the contents of a folder chosen by a client on my site.
I have used opendir() and scandir(), but they cannot read directory contents from the client's computer.
Is there any way I can read the file names from a visitor's directory?
function ListIn($dir, $prefix = '') {
    $dir = rtrim($dir, '\\/');
    $result = array();
    // scandir() takes a path string, not the resource returned by opendir()
    foreach (scandir($dir) as $f) {
        if ($f !== '.' && $f !== '..') {
            if (is_dir("$dir/$f")) {
                // Recurse into subdirectories, carrying the relative prefix along
                $result = array_merge($result, ListIn("$dir/$f", "$prefix$f/"));
            } else {
                $result[] = $prefix . $f;
            }
        }
    }
    return $result;
}
I need to implement this in either PHP or JavaScript.

This is possible with the file and storage APIs that modern JavaScript provides:
http://www.html5rocks.com/en/tutorials/file/dndfiles/
However, if you need raw read/write access, I suggest you read up on the Chrome platform APIs.

You cannot do this with PHP or any other server-side technology; the server never sees the client's file system.
You might be able to do it with a browser plugin or a Flash app.
Ask yourself why you want to do this in the first place.
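If what you ultimately need is just the file names on the server, one workaround is to have the visitor pick the files explicitly and upload them; the server never browses their disk, it only sees what was submitted. A hedged sketch, assuming an HTML form with <input type="file" name="files[]" multiple webkitdirectory> (webkitdirectory is non-standard but lets supporting browsers select a whole folder):
<?php
// Lists the names of files the visitor chose to upload via a form field named files[]
if (isset($_FILES['files'])) {
    $names = array();
    foreach ($_FILES['files']['name'] as $i => $name) {
        // Only keep entries that uploaded without error
        if ($_FILES['files']['error'][$i] === UPLOAD_ERR_OK) {
            $names[] = $name;
        }
    }
    print_r($names);
}
?>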


Workaround to XHR request and download prohibition

I have a weird situation:
I get data from a Postgres database, and from that data I create a table on a website using Grid.js. Each row has a "Download" button that takes two arguments from that table entry and sends them to a function. Originally, that function made an XHR request to a PHP file, which gets files from another DB, creates a ZIP file, and should send it to the user with readfile().
I have now discovered that this is not possible: XHR does not trigger file downloads, for safety reasons.
I could use window.location to call the PHP file and get the download, but I am dealing with hundreds of files, so I cannot write hundreds of PHP files to get the data individually. I could, but it would be very hard to maintain and manage all those files, and it feels unprofessional.
Right now, I can:
Send the data from JS to PHP, using POST;
Using those two variables, fetch the data from another Postgres server;
Use those files and create a ZIP file (the ZIP files cannot be stored permanently on the server because of storage restrictions; a cronjob on the server will clean the files eventually).
I need to:
Send the ZIP to the user;
Maintain the simplest code possible, in a way that I can feed it two variables and it just works, without needing a PHP file for each table row (if that makes sense).
The current code is:
JavaScript
const getData = (schema, table) => {
    const xhr = new XMLHttpRequest();
    xhr.open('POST', 'php/get_data.php', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8');
    let packg = {
        schema: schema,
        table: table
    };
    const packgJSON = JSON.stringify(packg);
    // Vanilla JS XHR requires this formatting to send the data
    const data = new URLSearchParams({'content': packgJSON});
    xhr.send(data);
};
PHP
<?php
// File with connection info
$config = include('config.php');

// Connection info
$host = $config['host'];
$port = $config['port'];
$database = $config['database'];
$username = $config['username'];
$password = $config['password'];

// POST reception
$packg = json_decode($_POST['content']);
$schema = $packg->schema;
$table = $packg->table;

// File info and paths
$filename = $table;
$rootDir = "tempData/";
$fileDir = $filename . "/";
$filePath = $rootDir . $fileDir;
$completeFilePath = $filePath . $filename;
$shpfile = $filename . ".shp";
$zipped = $filename . ".zip";
$zipFile = $completeFilePath . ".zip";

// Function to send the file (PROBLEM - THIS DOES NOT WORK WITH XHR)
function sendZip($zipFile) {
    $zipName = basename($zipFile);
    header("Content-Type: application/zip");
    header("Content-Disposition: attachment; filename=$zipName");
    header("Content-Length: " . filesize($zipFile));
    ob_flush();
    flush();
    readfile($zipFile);
}

// Send the zip if it's already available (NOT PROBLEMATIC)
if (file_exists($zipFile)) {
    sendZip($zipFile);
}

// Get shapefile, zip it and send it, if not available (NOT PROBLEMATIC)
if (!file_exists($zipFile)) {
    // Get files
    exec("mkdir -p -m 777 $filePath");
    exec("pgsql2shp -h $host -p $port -u $username -P $password -g geom -k -f $completeFilePath $database $schema.$table");
    // ZIP the files
    $zip = new ZipArchive;
    if ($zip->open($zipFile, ZipArchive::CREATE) === TRUE) {
        $handlerDir = opendir($filePath);
        // Iterate over all entries inside the folder
        while ($handlerFile = readdir($handlerDir)) {
            // If the entries are indeed files, and not directories
            if (is_file($filePath . $handlerFile) && $handlerFile != "." && $handlerFile != "..") {
                // Zip them
                $zip->addFile($filePath . $handlerFile, $fileDir . $handlerFile);
            }
        }
        // Close the archive
        $zip->close();
    }
    sendZip($zipFile);
}
?>
As pointed out by @epascarello here, a simple GET request solves this.
Even though I was afraid that moving away from POST would expose me to SQL injection, the variables pass through the pgsql2shp program, which only accepts valid schema and table names, so there is no need to worry about that as much.
I am currently using this KISS code, and it works:
const getData = (schema, table) => {
    // Encode the values so special characters survive the query string
    window.location = 'php/get_data.php?schema=' + encodeURIComponent(schema) + '&table=' + encodeURIComponent(table);
};
In PHP, only a small change is needed, from POST reception to GET reception. The variables arrive already separated, so there is no need to decode anything:
$schema = $_GET['schema'];
$table = $_GET['table'];
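One caveat worth adding: the same two values are interpolated into exec() calls in the script above, so a whitelist check before using them is still cheap insurance. A minimal sketch (the identifier pattern is an assumption; widen it if your schema or table names use other characters):
// Allow only plain identifiers: letters, digits, underscores
function validIdentifier($name) {
    return is_string($name) && preg_match('/^[A-Za-z_][A-Za-z0-9_]*$/', $name) === 1;
}

$schema = $_GET['schema'];
$table = $_GET['table'];
if (!validIdentifier($schema) || !validIdentifier($table)) {
    http_response_code(400);
    exit('Invalid schema or table name');
}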
This goes to show that sometimes we look so deep into a problem that the solution is right in front of us.

PHP and AJAX download of a few-MB file freezes website

Hello, I've searched everywhere to find the answer, but none of the solutions I've tried have helped.
What I am building is a site which connects to YouTube to let users search for and download videos as MP3 files. I have built the site with the search etc., but I am having a problem with the download part (I've worked out how to get the YouTube audio file). The audio format is originally audio/mp4, so I need to convert it to MP3, but first I need to get the file onto the server.
So on the download page I've made a script that sends an AJAX request to the server to start downloading the file. It then sends a request to a different page every few seconds to find out the progress and update it on the page the user is viewing.
However, the problem is that while the video is downloading, the whole website freezes (no pages load until the file has fully downloaded), so when the script tries to find out the progress, it can't until the download is completely done.
The script that performs the download:
<?php
session_start();
if (isset($_GET['yt_vid']) && isset($_GET['yrt'])) {
    set_time_limit(0); // prevent the script from stopping execution
    include "assets/functions.php";
    define('CHUNK', (1024 * 8 * 1024));
    if ($_GET['yrt'] == "gphj") {
        $vid = $_GET['yt_vid'];
        $mdvid = md5($vid);
        if (!file_exists("assets/videos/" . $mdvid . ".mp4")) { // check if the file already exists; if not, proceed to download it
            $url = urlScraper($vid); // urlScraper() gets the audio file URL via a simple cURL request; it takes less than a second
            if (!isset($_SESSION[$mdvid])) {
                $_SESSION[$mdvid] = array(time(), 0, retrieve_remote_file_size($url));
            }
            $file = fopen($url, "rb");
            $localfile_name = "assets/videos/" . $mdvid . ".mp4"; // the file is stored on the server so it doesn't have to be downloaded every time
            $localfile = fopen($localfile_name, "w");
            $time = time();
            while (!feof($file)) {
                $_SESSION[$mdvid][1] = (int)$_SESSION[$mdvid][1] + 1; // chunk counter, read by the progress page
                file_put_contents($localfile_name, fread($file, CHUNK), FILE_APPEND);
            }
            echo "Execution time: " . (time() - $time);
            fclose($file);
            fclose($localfile);
            $result = curl_result($url, "body");
        } else {
            echo "Failed.";
        }
    }
}
?>
I also had that problem in the past. The reason it does not work is that a session can only be open for writing once at a time: session_start() locks the session file, and every other request for the same session blocks until the lock is released.
What you need to do is modify your download script and call session_write_close() each time directly after writing to the session,
like:
session_start();
if (!isset($_SESSION[$mdvid])) {
    $_SESSION[$mdvid] = array(time(), 0, retrieve_remote_file_size($url));
}
session_write_close(); // release the session lock so other requests can proceed
and also in the while loop:
while (!feof($file)) {
    session_start();
    $_SESSION[$mdvid][1] = (int)$_SESSION[$mdvid][1] + 1;
    session_write_close(); // re-acquire and release the lock for each chunk
    file_put_contents($localfile_name, fread($file, CHUNK), FILE_APPEND);
}
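For the polling side, the progress page (not shown in the question) should follow the same pattern: read the session, then release the lock immediately. A minimal sketch, assuming the array layout array(start_time, chunks_written, total_bytes) used above, a yt_vid query parameter, and the same CHUNK size as the downloader:
<?php
// progress.php - hypothetical polling endpoint, sketched from the question's session layout
session_start();
$mdvid = md5($_GET['yt_vid']);
$state = isset($_SESSION[$mdvid]) ? $_SESSION[$mdvid] : null;
session_write_close(); // release the lock right away so this page never blocks

define('CHUNK', (1024 * 8 * 1024)); // must match the downloader's chunk size

header('Content-Type: application/json');
if ($state === null) {
    echo json_encode(array('status' => 'unknown'));
} else {
    list($started, $chunks, $totalBytes) = $state;
    $downloaded = $chunks * CHUNK; // approximate: the final chunk may be shorter
    echo json_encode(array(
        'status' => 'downloading',
        'progress' => $totalBytes > 0 ? min(1.0, $downloaded / $totalBytes) : 0
    ));
}
?>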

write a file on local disk from web app [duplicate]

I am trying to create and save a file to the root directory of my site, but I don't know where it's creating the file, as I cannot see it anywhere. Also, I need the file to be overwritten every time, if possible.
Here is my code:
$content = "some text here";
$fp = fopen("myText.txt","wb");
fwrite($fp,$content);
fclose($fp);
How can I set it to save on the root?
It's creating the file in the same directory as your script. Try this instead.
$content = "some text here";
$fp = fopen($_SERVER['DOCUMENT_ROOT'] . "/myText.txt","wb");
fwrite($fp,$content);
fclose($fp);
If you are running PHP on Apache, then you can use the environment variable called DOCUMENT_ROOT. This means the path is dynamic and can be moved between servers without messing about with the code.
<?php
$fileLocation = getenv("DOCUMENT_ROOT") . "/myfile.txt";
$file = fopen($fileLocation,"w");
$content = "Your text here";
fwrite($file,$content);
fclose($file);
?>
This question was asked years ago, but here is a modern approach using PHP 5 or newer.
$filename = 'myfile.txt';
if (!file_put_contents($filename, 'Some text here')) {
    // overwriting the file failed (permission problem, maybe); debug or log here
}
If the file doesn't exist in that directory, it will be created; otherwise it will be overwritten, unless the FILE_APPEND flag is set.
file_put_contents is a built-in function that has been available since PHP 5.
Documentation for file_put_contents
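If several requests might write the same file concurrently, passing the LOCK_EX flag to file_put_contents() makes each writer take an exclusive lock first; a small sketch:
// Overwrite the file while holding an exclusive lock, and log on failure
$bytes = file_put_contents(
    $_SERVER['DOCUMENT_ROOT'] . '/myText.txt',
    'some text here',
    LOCK_EX
);
if ($bytes === false) {
    error_log('Could not write myText.txt');
}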
fopen() with a relative path creates the file relative to the script's working directory. In other words, if you're just running the file ~/test.php, your script will create ~/myText.txt.
This can get a little confusing if you're using any URL rewriting (such as in an MVC framework), as it will likely create the new file in whatever directory contains the root index.php file.
Also, you must have the correct permissions set, and you may want to test before writing to the file. The following would help you debug:
$fp = fopen("myText.txt", "wb");
if ($fp === false) {
    // do debugging or logging here
} else {
    fwrite($fp, $content);
    fclose($fp);
}

Web services and PhoneGap: best practices

Hi, I am using PhoneGap for cross-platform development (I use AngularJS as my JS framework).
I want to use a web service to access a list of positions from my database (MySQL) on my website.
The problem is that the solution I found is not secure at all:
JavaScript
var xhr;
if (window.XMLHttpRequest)
    xhr = new XMLHttpRequest();
else
    xhr = new ActiveXObject("Microsoft.XMLHTTP");
xhr.open("GET", "http://localhost:8888/MAMP_Site/0/test.php", true);
xhr.send(null);
xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && (xhr.status == 200 || xhr.status == 0)) {
        console.log("Ready State 4: JSON textual data retrieved");
        handleData(xhr.responseText); // JSON textual data
    }
};
function handleData(data)
{
    var jsonData;
    console.log("ReceivedData from WebService: " + data);
    // eval() on untrusted input is risky; JSON.parse(data) would be safer
    jsonData = eval('(' + data + ')');
    $scope.lastUpdate = jsonData[0];
    $scope.jsonData = jsonData[1];
    $scope.$apply();
}
PHP (used as "web service")
<?php
header('Access-Control-Allow-Origin: *');
header("Content-Type: text/plain");

class UserInfo {
    public $id = "";
    public $name = "";
    public $username = "";
    public $timestamp = "";
    public function __construct($_id, $_name, $_username, $_timestamp) {
        $this->id = $_id;
        $this->name = $_name;
        $this->username = $_username;
        $this->timestamp = $_timestamp;
    }
}

$db = mysql_connect('localhost:8889', 'root', 'root');
mysql_select_db('myDbName', $db);
$sql = 'SELECT id,name,username,timestamp FROM positions_test';
$req = mysql_query($sql) or die('SQL error!<br>'.$sql.'<br>'.mysql_error());
$dataArray = array();
while ($data = mysql_fetch_assoc($req)) {
    $dataArray[] = new UserInfo($data['id'], $data['name'], $data['username'], $data['timestamp']);
}

// Last modified time
$sql = "SELECT UPDATE_TIME FROM information_schema.tables WHERE TABLE_SCHEMA = 'myDbName' AND TABLE_NAME = 'positions_test'";
$req = mysql_query($sql) or die('SQL error!<br>'.$sql.'<br>'.mysql_error());
$data = mysql_fetch_assoc($req)["UPDATE_TIME"];
$jsonDataArray = array($data, $dataArray);
echo json_encode($jsonDataArray);
mysql_close();
?>
Basically, the PHP returns JSON (as text), I get it (as text) in my JS, and then I evaluate it as JSON.
The question
Security concern
As the application is made with Cordova, all JS and HTML source code can be viewed, and so can the URL of my PHP "web service". It means that anybody who has the address can access the JSON data. Even if this data is public (in my case), I want it to be accessible only from my app (this way I can, for instance, stop a bot from harvesting all of this data or spamming).
Token or user-agent
As there is no authentication for users, is there any way for my web service to know where the request comes from?
I thought of using a token to ensure that the request comes from my app, but once again, as the source code can be viewed, anybody could see the token or the code that generates it.
Maybe use the User-Agent header to know whether it is being accessed from a mobile device?
Other port than 80
Maybe it would be judicious to choose a port other than 80 to connect to my web service, but how can I select my connection port?
Best practice
The main point, really, is: what are the best practices for web services on PhoneGap (Cordova)?
Should I use SSL/HTTPS?
Should I use a real web service instead of a simple PHP page and XMLHttpRequest? If yes, which one?
And of course, how do I build my web service properly and securely?
I know this is a long post, but I searched the web for a while and found a lot of interesting material, yet nothing really concrete on the best practices for building web services for a PhoneGap application (with no user authentication).
You could try to obfuscate it, or do a lot of other things, but in the end the client has to receive the data, so there is nothing you can do to fully prevent a user from reading your data, seeing your client-side code, or spamming your service.
The best you can do to keep the service safe is: make sure the connection to the DB does not allow writes, keep all the software involved updated regularly, and check that the queries sent to your service have the syntax and content that you are expecting.
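As a side note, the mysql_* functions in the question's PHP are deprecated (and removed in PHP 7). A hedged sketch of the same read path using PDO with a dedicated read-only account; the DSN details and the readonly_user credentials are assumptions for illustration:
<?php
// Same SELECT as the question, via PDO; 'readonly_user' is a hypothetical
// MySQL account granted only SELECT on myDbName.
$pdo = new PDO(
    'mysql:host=localhost;port=8889;dbname=myDbName;charset=utf8',
    'readonly_user',
    'readonly_password',
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
);
$stmt = $pdo->query('SELECT id, name, username, timestamp FROM positions_test');
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
header('Content-Type: application/json');
echo json_encode($rows);
?>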

Universal website crawler using PHP [duplicate]

This question already has answers here:
How to show google.com in an iframe? (9 answers)
Closed 9 years ago.
I want to create a universal website crawler using PHP.
Using my web application, a user will input any URL, provide input on what he needs to get from the given site, and click a Start button.
Then my web application will begin to get the data from the source website.
I am loading the page in an iframe, and using jQuery I get the class and tag names of the specific area from the user.
But when I load an external website like eBay or Amazon, it does not work, as these sites are restricted. Is there any way to resolve this issue so I can load any site in an iframe? Or is there an alternative to what I want to achieve?
I am actually inspired by Mozenda, a software developed in .NET: http://www.mozenda.com/video01-overview/.
They load a site in a browser control, and it's almost the same thing.
You can't crawl a site on the client side if the target website returns the "X-Frame-Options: SAMEORIGIN" response header (see @mc10's duplicate link in the question comments). You must crawl the target site using server-side functionality.
The following solution might be suitable if wget has all of the options that you need. wget -r will recursively crawl a site and download its documents. It has many useful options, like translating absolute embedded URLs to relative, local ones.
Note: wget must be installed on your system for this to work. I don't know which operating system you're running this on, but on Ubuntu it's sudo apt-get install wget.
See wget --help for additional options.
<?php
$website_url = $_GET['user_input_url'];
// doesn't work for IPv6 addresses
// http://php.net/manual/en/function.filter-var.php
if (filter_var($website_url, FILTER_VALIDATE_URL) !== false) {
    $command = "wget -r " . escapeshellarg($website_url);
    system($command);
    // iterate through downloaded files and folders
} else {
    // handle invalid url
}
You can substitute whatever element you're looking for in the second foreach loop of the following script. As is, the script gathers the first 100 links on CNN's homepage and puts them in a text file named "cnnLinks.txt" in the same folder as this file.
Just change the $pre, $base, and $post variables to whatever URL you want to crawl! I separated them like that so you can switch between common websites faster.
<?php
set_time_limit(0);
$pre = "http://www.";
$base = "cnn";
$post = ".com";
$domain = $pre.$base.$post;
$content = "google-analytics.com/ga.js";
$content_tag = "script";
$output_file = "cnnLinks.txt";
$max_urls_to_check = 100;
$rounds = 0;
$domain_stack = array();
$max_size_domain_stack = 1000;
$checked_domains = array();

while ($domain != "" && $rounds < $max_urls_to_check) {
    $doc = new DOMDocument();
    @$doc->loadHTMLFile($domain); // '@' suppresses warnings from malformed HTML
    $found = false;
    foreach ($doc->getElementsByTagName($content_tag) as $tag) {
        if (strpos($tag->nodeValue, $content) !== false) {
            $found = true;
            break;
        }
    }
    $checked_domains[$domain] = $found;
    foreach ($doc->getElementsByTagName('a') as $link) {
        $href = $link->getAttribute('href');
        if (strpos($href, 'http://') !== false && strpos($href, $domain) === false) {
            $href_array = explode("/", $href);
            if (count($domain_stack) < $max_size_domain_stack &&
                !isset($checked_domains["http://".$href_array[2]])) {
                array_push($domain_stack, "http://".$href_array[2]);
            }
        }
    }
    $domain_stack = array_unique($domain_stack);
    $domain = $domain_stack[0];
    unset($domain_stack[0]);
    $domain_stack = array_values($domain_stack);
    $rounds++;
}

$found_domains = "";
foreach ($checked_domains as $key => $value) {
    if ($value) {
        $found_domains .= $key."\n";
    }
}
file_put_contents($output_file, $found_domains);
?>
Take a look at using the file_get_contents() function in PHP.
You may have better success retrieving the HTML of a given site like this:
$html = file_get_contents('http://www.ebay.com');
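Once the HTML is on the server, you can parse it there instead of in an iframe. A minimal sketch with DOMDocument and DOMXPath; the URL and the "price" class are placeholders, not from the question:
<?php
// Fetch a page server-side, then pull out elements by class name
$html = file_get_contents('http://www.example.com'); // placeholder URL
$doc = new DOMDocument();
@$doc->loadHTML($html); // '@' silences warnings about real-world malformed markup
$xpath = new DOMXPath($doc);
// Every element whose class attribute contains "price" (hypothetical class name)
foreach ($xpath->query('//*[contains(@class, "price")]') as $node) {
    echo trim($node->textContent), "\n";
}
?>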
