My web app runs over an HTTPS connection with an SSL certificate. I need to show pictures to the user: I get the links to the pictures from an API request and embed them afterwards. Unfortunately, the picture URLs are plain HTTP, so the browser warns that there are insecure parts on the site, which must not happen...
I could download the pictures and link to the local copies afterwards, but I think this might be a little time-consuming and not the best way to handle it.
Does somebody know a better solution for this? I'm using PHP, jQuery and JavaScript.
You'll have to write a proxy on your server and serve all the images through it. Basically your URLs should look like:
$url = 'http://ecx.images-amazon.com/images/I/51MU5VilKpL._SL75_.jpg';
$url = urlencode($url);
echo '<img src="/proxy.php?from=' . $url . '">';
and the proxy.php:
<?php
$cache = '/path/to/cache';
$url = $_GET['from'];
// cache key derived from the requested URL
$hash = md5($url);
$file = $cache . DIRECTORY_SEPARATOR . $hash;
// download the image once, then serve it from the cache
if (!file_exists($file)) {
    $data = file_get_contents($url);
    file_put_contents($file, $data);
}
header('Content-Type: image/jpeg');
readfile($file);
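A small caveat: the proxy above always sends image/jpeg. If the proxied images can also be PNG or GIF, the real type can be detected from the cached file with PHP's fileinfo extension. The helper below is my own addition, not part of the original answer:

```php
<?php
// Detect the cached image's real MIME type instead of hardcoding
// image/jpeg (uses the fileinfo extension, enabled by default).
function image_content_type($file) {
    $finfo = new finfo(FILEINFO_MIME_TYPE);
    $mime = $finfo->file($file);
    // fall back to JPEG if detection fails
    return $mime !== false ? $mime : 'image/jpeg';
}
```

In proxy.php the last two lines would then become header('Content-Type: ' . image_content_type($file)); readfile($file);.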
Ok, I've got a streaming example for you. You need to adapt it to your needs, of course.
Suppose you make a php file on your server named mws.php with this content:
<?php
if (isset($_GET['image'])) {
    header('Content-Type: image/jpeg');
    header('Content-Transfer-Encoding: binary');
    echo file_get_contents($_GET['image']);
}
Look for any image on the web, for instance:
http://freebigpictures.com/wp-content/uploads/2009/09/mountain-stream.jpg
now you can show that image, as if it were located on your own secure server, with this URL:
https://<your server>/mws.php?image=http://freebigpictures.com/wp-content/uploads/2009/09/mountain-stream.jpg
It would of course be better to store the image locally if you need it more than once, and you have to include the correct code to get it from Amazon MWS ListMatchingProducts, but this is the basic idea.
Please don't forget to secure your script against abuse.
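One common way to secure such a script is to only proxy URLs whose scheme and host are on a whitelist, and reject everything else. A minimal sketch; the function name and the allowed host list are mine, for illustration:

```php
<?php
// Reject any URL whose scheme or host we don't expect; the host list
// here is only an example and should match your actual image sources.
function is_allowed_image_url($url) {
    $allowed_hosts = array('ecx.images-amazon.com', 'freebigpictures.com');
    $scheme = parse_url($url, PHP_URL_SCHEME);
    $host = parse_url($url, PHP_URL_HOST);
    return in_array($scheme, array('http', 'https'), true)
        && in_array($host, $allowed_hosts, true);
}

// In mws.php, before echoing the remote file:
if (isset($_GET['image']) && !is_allowed_image_url($_GET['image'])) {
    http_response_code(403);
    exit('Forbidden');
}
```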
Ok, so for about a week now I've been doing tons of research on making XMLHttpRequests to servers, and I have learned a lot about CORS, AJAX/jQuery requests and the Google Feed API, but I am still completely lost.
The Goal:
There are two sites in the picture, both of which I have access to. The first is a WordPress site which has the RSS feed, and the other is my localhost site running off of XAMPP (soon to be a published site when I'm done). I am trying to get the RSS feed from the WordPress site and display it on my localhost site.
The Issue:
I run into the infamous Access-Control-Allow-Origin error in the console. I know that I can fix that by setting the header in the .htaccess file of the website, but there are online aggregators that are able to read and display the feed when I simply give them the link. So I don't really know what those sites are doing that I'm not, and what the best way is to achieve this without posing any easy security threats to either site.
I would highly prefer not to use any third-party plugins for this; I would like to aggregate the feed through my own code, as I have done for an RSS feed on the localhost site, but if I have to I will.
UPDATE:
I've made huge progress learning PHP and have finally got a working bit of code that lets me download the feed files from their various sources and store them in cache files on the server. What I have done is set an AJAX request behind some buttons on my site which switch between the RSS feeds. The AJAX request POSTs a JSON-encoded array containing some data to my PHP file, which then downloads the requested feed via cURL (the http_get_contents function is copied from a GitHub dev, as I don't know how to use cURL yet) and stores it in a cache file named after the md5 of the URL; it then filters what I need from the data and sends it back to the front end. However, I have two more questions... (it's funny how that works: getting one answer and ending up with two more questions).
Question #1: Where should I store the cache files and the PHP files on the server? I have heard that you are supposed to store them outside the web root, but I am not sure how to access them that way.
Question #2: When I look at the site's sources in the browser as I click the buttons (each of which sends an AJAX request to the PHP file), the PHP file shows up in the list of source files, and more copies of it appear with every click. Is there a way to prevent this? I may have to implement another method to get this working.
Here is my working php:
<?php
// cURL-based http_get_contents declaration
function http_get_contents($url, $opts = array()) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_USERAGENT, "{$_SERVER['SERVER_NAME']}");
    curl_setopt($ch, CURLOPT_URL, $url);
    if (is_array($opts) && $opts) {
        foreach ($opts as $key => $val) {
            curl_setopt($ch, $key, $val);
        }
    }
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if (false === ($retval = curl_exec($ch))) {
        die(curl_error($ch));
    } else {
        return $retval;
    }
}
// receive and decode the $_POSTed array
$post = json_decode($_POST['jsonString'], true);
$url = $post[0];
$xmn = $post[1]; // starting item index (i.e. to return 3 items from the feed, starting with this one)
$xmx = $xmn + 3; // maximum index (so three items in total are returned)
$cache = '/tmp/' . md5($url) . '.html';
$cacheint = 0; // cache lifetime in hours; 0 means the feed is always re-downloaded. I will implement a way to check if there is a newer version of the file on the other site in the future
// if the cache file doesn't exist (or is stale), download the feed and write its contents to the cache file
if (!file_exists($cache) || ((time() - filemtime($cache)) > 3600 * $cacheint)) {
    if ($feed_content = http_get_contents($url)) {
        $fp = fopen($cache, 'w');
        fwrite($fp, $feed_content);
        fclose($fp);
    }
}
// parse and echo the results
$content = file_get_contents($cache);
$x = new SimpleXMLElement($content);
$item = $x->channel->item;
echo '<tr>';
for ($i = $xmn; $i < $xmx; $i++) {
    echo '<td class="item"><p class="title clear">' .
        $item[$i]->title .
        '</p><p class="desc">' .
        substr($item[$i]->description, 0, 250) .
        '... <a href="' .
        $item[$i]->link .
        '" target="_blank">more</a></p><p class="date">' .
        $item[$i]->pubDate .
        '</p></td>';
}
echo '</tr>';
?>
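Regarding the planned freshness check (the $cacheint variable above): the condition can be pulled out into a small helper so a feed is only re-downloaded when its cache file is missing or older than a chosen max age. This is my own sketch, not part of the question's code:

```php
<?php
// True when the cache file is missing or older than $maxAge seconds,
// i.e. when the feed should be downloaded again.
function cache_is_stale($cacheFile, $maxAge) {
    if (!file_exists($cacheFile)) {
        return true;
    }
    return (time() - filemtime($cacheFile)) > $maxAge;
}
```

The download block then reads naturally as if (cache_is_stale($cache, 3600 * $cacheint)) { ... }.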
Hello, I've searched everywhere to find the answer, but none of the solutions I've tried helped.
What I am building is a site which connects to YouTube to let users search for and download videos as MP3 files. I have built the site with the search etc., but I am having a problem with the download part (I've worked out how to get the YouTube audio file). The audio is originally audio/mp4, so I need to convert it to MP3, but first I need to get the file onto the server.
So on the download page I've made a script that sends an AJAX request to the server to start downloading the file. It then sends a request to a different page every few seconds to find out the progress and update it on the page the user is viewing.
However, the problem is that while the video is downloading, the whole website freezes (no pages load until the file is fully downloaded), so when the script tries to find out the progress, it can't until the download is completely done.
The script that does the downloading:
<?php
session_start();
if (isset($_GET['yt_vid']) && isset($_GET['yrt'])) {
    set_time_limit(0); // prevent the script from stopping execution
    include "assets/functions.php";
    define('CHUNK', (1024 * 8 * 1024));
    if ($_GET['yrt'] == "gphj") {
        $vid = $_GET['yt_vid'];
        $mdvid = md5($vid);
        if (!file_exists("assets/videos/" . $mdvid . ".mp4")) { // check if the file already exists; if not, proceed to download it
            $url = urlScraper($vid); // urlScraper is a function that gets the audio file URL; it sends a simple cURL request and takes less than a second to complete
            if (!isset($_SESSION[$mdvid])) {
                $_SESSION[$mdvid] = array(time(), 0, retrieve_remote_file_size($url));
            }
            $file = fopen($url, "rb");
            $localfile_name = "assets/videos/" . $mdvid . ".mp4"; // the file is stored on the server so it doesn't have to be downloaded every time
            $localfile = fopen($localfile_name, "w");
            $time = time();
            while (!feof($file)) {
                $_SESSION[$mdvid][1] = (int)$_SESSION[$mdvid][1] + 1;
                file_put_contents($localfile_name, fread($file, CHUNK), FILE_APPEND);
            }
            echo "Execution time: " . (time() - $time);
            fclose($file);
            fclose($localfile);
            $result = curl_result($url, "body");
        } else {
            echo "Failed.";
        }
    }
}
?>
I also had that problem in the past. The reason it does not work is that the session file can only be open for writing by one request at a time: session_start() locks it, and every other request that starts the session blocks until the lock is released.
What you need to do is modify your download script and call session_write_close() each time directly after writing to the session,
like:
session_start();
if (!isset($_SESSION[$mdvid])) {
    $_SESSION[$mdvid] = array(time(), 0, retrieve_remote_file_size($url));
}
session_write_close();
and also in the while
while (!feof($file)) {
    session_start();
    $_SESSION[$mdvid][1] = (int)$_SESSION[$mdvid][1] + 1;
    session_write_close();
    file_put_contents($localfile_name, fread($file, CHUNK), FILE_APPEND);
}
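With the lock released between writes, a separate polling endpoint can read the counter while the download runs. A sketch of the percentage calculation, based on the question's session layout of [start time, chunks written, total bytes] and its 8 MiB CHUNK size (the function name is mine):

```php
<?php
// Matches the downloader's chunk size.
define('CHUNK', 1024 * 8 * 1024);

// Turn the session state [start time, chunks written, total bytes]
// into a rough completion percentage.
function download_percent(array $state) {
    list($started, $chunks, $total) = $state;
    if ($total <= 0) {
        return 0;
    }
    return (int)min(100, round($chunks * CHUNK / $total * 100));
}
```

A hypothetical progress.php would then do session_start(); $p = download_percent($_SESSION[$mdvid]); session_write_close(); echo json_encode(array('percent' => $p));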
I am using PHP to generate a .srt file to add to an HTML5 video, but it is not working, and the console shows this message:
Resource interpreted as TextTrack but transferred with MIME type text/plain: "../subtitles/Test%20Edit.srt".
I am using this jQuery script to make the video work: http://www.storiesinflight.com/js_videosub/#download. It works fine with the example, but not with my .srt file.
I am creating the .srt file with this code:
$folder = 'subtitles/';
$filename = $this->get_title() . '.srt';
$fp = fopen($folder . $filename, 'w');
$i = 1;
$text = '';
$Query = mysql_query("") or die(mysql_error());
while ($a = mysql_fetch_array($Query)) {
    $subtitle = new Subtitle($a['idSubtitle']);
    $text .= $i . chr(13) . chr(10) . $subtitle->get_start() .
        ',000 --> ' . $subtitle->get_end() . ',000' . chr(13) . chr(10) .
        $subtitle->get_text() . chr(13) . chr(10) . chr(13) . chr(10);
    $i++;
}
fwrite($fp, $text);
fclose($fp);
It is generating this file:
1
00:00:01,000 --> 00:00:10,000
Test
2
00:00:12,000 --> 00:00:15,000
Test 2
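As an aside, the hardcoded ',000' millisecond suffix in the generator could be replaced by a small formatting helper that produces the full HH:MM:SS,mmm timestamp SRT expects. This helper is my own sketch, not part of the question:

```php
<?php
// Format seconds (plus optional milliseconds) as an SRT timestamp,
// e.g. 3725 seconds -> "01:02:05,000".
function srt_timestamp($seconds, $millis = 0) {
    return sprintf('%02d:%02d:%02d,%03d',
        intdiv($seconds, 3600),
        intdiv($seconds % 3600, 60),
        $seconds % 60,
        $millis);
}
```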
As Charlotte Dunois said, you're not setting the right MIME type.
There are multiple solutions to this. One would be to load the .srt from a PHP page that sets the MIME type using header() before any content is served. Another, usually more practical, solution is to configure your web server to send the correct Content-Type header for .srt files by itself.
If you're using Apache, look into mod_mime, otherwise look up the documentation for your specific web server.
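With Apache and mod_mime, for example, one line in the site's .htaccess is enough; application/x-subrip is the usual type for SubRip files (the exact type is your choice, the point is that it must not be text/plain):

```apache
# .htaccess - serve .srt files with an explicit SubRip MIME type
AddType application/x-subrip .srt
```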
I have a page with some forms and input fields which the user fills in; the values are then sent to a PHP page via AJAX and $_POST.
The PHP file then writes the output to a .txt file, and that works just fine. My problem is that I am trying to force the user to download the file on the same page that creates it, after it is created, and I can't seem to get it to work; nothing happens besides the file being created:
Here the code where the .txt File is created (this works nice):
$myfile = fopen("test.txt", "w") or die("Unable to open file!");
$index = 0;
foreach ($URLsArray as &$url) {
    $row = $SomeArray[$keys[$index]] . "\t" . $SomeArray[$keys[$index]] . "\t" . $SomeArray[$keys[$index]] . "\t" . $SomeArray[$keys[$index]] . "\t" . $url . "\n";
    fwrite($myfile, $row);
    $index = $index + 1;
}
fclose($myfile);
And here the code where I try to force the download (after the fclose):
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename='.basename('test.txt'));
header('Expires: 0');
header('Cache-Control: must-revalidate');
header('Pragma: public');
header('Content-Length: ' . filesize('test.txt'));
readfile('test.txt');
exit;
And when I try this I get the error "Unexpected token A". "A" is the first letter in the test.txt that is created.
I know that there are a lot of similar questions, but not one solution worked for me.
I hope someone can help me :)
Instead of doing this on the backend with PHP, you could try to do it on the frontend. The easiest solution is to write some JavaScript code which adds an iframe to your webpage; the iframe's src should then point to the file you want the user to download.
Okay, here is my solution. @CBroe, thanks for the hint about the background request; I would have tried for ages to get the download working inside the PHP file. So what I did:
In PHP, I echo the filename after the .txt file is created:
echo json_encode(array('filename' => "your_Data".time().".txt"));
Then in JavaScript I read the filename in the success method and put the link into an <a> element. Later that will be a button which is set active.
success : function(data) {
$("#button_get_file_download").attr("href", "urlToFolder"+data.filename);
},
I need to have debugging/logging information for a flash video player.
I need to display the client IP, along with the server IP of the stream... can this be done?
example stream: (actually it's a manifest that, upon load, retrieves the stream)
http://multiplatform-f.akamaihd.net/z/multi/april11/hdworld/hdworld_,512x288_450_b,640x360_700_b,768x432_1000_b,1024x576_1400_m,1280x720_1900_m,1280x720_2500_m,1280x720_3500_m,.mp4.csmil/manifest.f4m
Techs available to me: Javascript, AS3 FLASH.
Basically I need a "whatsmyip" and a reverse IP lookup through Flash/JavaScript... is this possible without server-side scripts?
Get client IP with JS:
<script type="text/javascript">
function getip(data){
alert(data.ip);
}
</script>
<script type="text/javascript" src="http://jsonip.appspot.com/?callback=getip">
</script>
Or with php:
<?php
echo $_SERVER['REMOTE_ADDR'];
?>
Get server IP only with server side script, like php:
<?php
$ip = gethostbyname('www.example.com');
echo $ip;
?>
I found this online, I don't know if it would help you. Hope it does.