Currently my website loads a JSON file from my server, looks for the data it needs, and displays it on screen, while also downloading and showing a related image from a public API. How would I go about storing the API-downloaded image on my own server, so that before every image fetch the site checks my server first and, only if the image doesn't exist there, goes to the public API, downloads it, and stores it?
The code below is a sample: when the site is loaded it pulls a random card from the JSON and displays its info along with the image. I want to download the image from the API, store it on my server, and check for it before downloading again, to keep API calls to a minimum.
function RandomCard() {
    const RC = Math.floor(Math.random() * cardAPIListings.data.length);
    const card = cardAPIListings.data[RC];
    const cardShowcase = document.getElementById("randomcardshowcase");
    const randomCardImage = document.getElementById("randomImage");
    switch (card.type) {
        case "Spell Card":
        case "Trap Card":
            cardShowcase.innerHTML = '"' + card.name + '"' + '<br><br>' +
                '[' + card.race + ' / ' + card.type + ']' + '<br>' +
                'Attri: ' + card.attribute + '<br><br>' +
                'Effect: ' + '<br>' + card.desc;
            randomCardImage.src = card.card_images[0].image_url;
            break;
    }
}
Edit:
The public API that I'm using says this:
[ Card Images
Users who pull our card images directly from our server instead of our Google Cloud repository will be immediately blacklisted.
Card images can be pulled from our Google Cloud server, but please only pull an image once and then store it locally (I assume this means store it on your own server/database).
If we find you are pulling a very high volume of images per second, then your IP will be blacklisted and blocked. ]
So I need a way to store the images in my own location as they are downloaded from the public API, so I don't get blacklisted.
Unless I'm misunderstanding your question: store images and other binary data on the file system only. There is no reason to burden the database with big binary blobs; store just the file path (or URL) in the database.
I am trying to extract and display train connections from the Deutsche Bahn website www.reiseauskunft.de on an info display (which just shows a simple HTML page with some JavaScript). So I want to put this info (the next available connections) into my HTML page.
Deutsche Bahn provides a "kind" of API, or at least something that looks like an API:
www.reiseauskunft.bahn.de/bin/query.exe/dn?S=MainzHbf&Z=Frankfurt(Main)Hbf&timeSel=depart&start=1
This link works and delivers a full web page with the next three connections from the (S)tart station to the (Z) target station, using the current time as the departure time (the start=1 parameter just executes the request).
You can find more info about the parameters here (German only):
www.geiervally.lechtal.at/sixcms/media.php/1405/Parametrisierte%20%DCbergabe%20Bahnauskunft(V%205.12-R4.30c,%20f%FCr.pdf
Because the YQL "html table" data table seems to be no longer supported, I found the advice to use "htmlstring" instead (YQL: html table is no longer supported).
I changed the example to my needs:
var site = "http://www.reiseauskunft.bahn.de/bin/query.exe/dn?S=MainzHbf&Z=Frankfurt(Main)Hbf&timeSel=depart&start=1";
var yql = "select * from htmlstring where url='" + site;
var resturl = "http://query.yahooapis.com/v1/public/yql?q=" + encodeURIComponent(yql) + "&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys";
but I got this error in the browser:
"Query syntax error(s) [line 1:140 mismatched character ' ' expecting ''']"
(At this position I can't find a space, and I wouldn't expect a ' there either???)
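For reference, the reported position is consistent with the concatenation never closing the quote around the URL: `yql` ends right after the site address, so the parser runs into unexpected input. A corrected build of the query string (a sketch only; it does not address the robots.txt issue mentioned later) would be:

```javascript
var site = "http://www.reiseauskunft.bahn.de/bin/query.exe/dn?S=MainzHbf&Z=Frankfurt(Main)Hbf&timeSel=depart&start=1";
// Close the quoted url value; without the trailing "'" the YQL parser
// reports a mismatched-character error at the end of the statement.
var yql = "select * from htmlstring where url='" + site + "'";
var resturl = "http://query.yahooapis.com/v1/public/yql?q=" + encodeURIComponent(yql) +
    "&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys";
```

Note that the YQL service itself has since been retired, so this fixes only the syntax error, not the viability of the approach.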
In the YQL console I ran the following:
select * from htmlstring where url='http://www.reiseauskunft.bahn.de/bin/query.exe/dn?S=MainzHbf&Z=Frankfurt(Main)Hbf&timeSel=depart&start=1'
and there I got the exception: Redirected to a robots.txt restricted URL.
Do both messages stem from the same problem? Or can I bypass the robots.txt restriction (does the YQL function act like a robot toward reiseauskunft.de)?
Is there any chance to retrieve the train connections with YQL?
Thanks in advance
Edit: it seems my approach with YQL will not work, so I will try another approach. Question closed?!
I am using the Google maps API v3 to create a portfolio.
Question :
Is there a way to generate the default image link so that it works on every Google server, or a way to know which server is used so I can generate the link accordingly?
Here is an example of what I'm currently doing; it may or may not help you find an answer.
User path :
The user enters the address of his business.
An iframe is displayed with the interior view of this business.
The user can navigate within this iframe to select his default picture.
From the view selected in the iframe, I create an image URL pointing directly at Google's servers, which I set as the default image.
At the moment, this URL can be (JS):
var image = "https://geo3.ggpht.com/cbk?panoid=" + panoId + "&output=thumbnail&cb_client=search.LOCAL_UNIVERSAL.gps&thumb=2&w=689&h=487&yaw=" + povHeading + "&pitch=" + povPitch + "&thumbfov=" + fov;
or
var image = "https://lh5.googleusercontent.com/" + panoId + "/w689-h487-k-no-pi"+povPitch+"-ya"+povHeading+"-ro0-fo"+fov+"/";
This worked for the vast majority of cases, but as more people use the service, some special cases have appeared (example link):
https://lh3.ggpht.com/-1dwRgcXpyYk/WS7bUYtLEdI/AAAAAAAAObA/zd-aK-rfWxYvA302eg6WT7qQoEKRrUxGgCLIB/x2-y2-z3/p
I am saving the link for both of the first two cases, but I have not found a general rule that can be applied to all of them.
The user who entered a business with this third example link is currently getting a 404 image not found.
Here is the code I'm currently using, if it helps clarify the question (JS):
function generateImg() {
    /* here I get all the vars used to create the image */
    // generate the img link
    var image = "https://geo3.ggpht.com/cbk?panoid=" + panoId + "&output=thumbnail&cb_client=search.LOCAL_UNIVERSAL.gps&thumb=2&w=689&h=487&yaw=" + povHeading + "&pitch=" + povPitch + "&thumbfov=" + fov;
    // fall back if the image does not exist
    UrlExists(image, function (status) {
        if (status === 404) {
            // 404 not found: use the googleusercontent form instead
            // (no `var` here, so we update the outer variable instead of shadowing it)
            image = "https://lh5.googleusercontent.com/" + panoId + "/w689-h487-k-no-pi" + povPitch + "-ya" + povHeading + "-ro0-fo" + fov + "/";
        }
    });
}
You can get a consistent result with the official API:
var image = "https://maps.googleapis.com/maps/api/streetview?size=400x400&pano=" + panoId + "&fov=90&heading=235&pitch=10&key=" + API_KEY;
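Filling in the asker's variables, the call to the official endpoint might be assembled like this. The pano ID, POV values, and key are placeholders; `size`, `pano`, `fov`, `heading`, `pitch`, and `key` are documented Street View Static API parameters:

```javascript
// Placeholder values standing in for the asker's viewer state.
var panoId = "EXAMPLE_PANO_ID";
var povHeading = 235;
var povPitch = 10;
var fov = 90;
var API_KEY = "YOUR_API_KEY";

// Build the Street View Static API URL from the selected point of view.
var image = "https://maps.googleapis.com/maps/api/streetview" +
    "?size=689x487" +
    "&pano=" + encodeURIComponent(panoId) +
    "&fov=" + fov +
    "&heading=" + povHeading +
    "&pitch=" + povPitch +
    "&key=" + API_KEY;
```

Because this goes through the documented endpoint rather than a specific geo*.ggpht.com or lh*.googleusercontent.com host, it sidesteps the which-server question entirely.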
Backstory:
We are an affiliate merchant and received a tracking code to implement on the heading page of our sales page. However, the tracking code requires the amount and the order ID to be filled in dynamically.
Our current platform doesn't provide Liquid fields to use in the tracking code, so I had to figure out which fields exist and how to get them to populate the src URL, which is the tracking code.
Using Zapier I was able to pull the key fields that provide the information we need:
subscription_id: sub_AIhebhUVf1aV4z - the tracking ID
contact__contact_profile__known_ltv: 7.95 - the amount
I'm not sure what code Zapier uses to pull this information; I assume it's a GET. You can set these fields as input data and recall the information in JavaScript, but I'm not having any luck putting the script together.
Input Data
amount = contact__contact_profile__known_ltv
tracking = subscription_id
var pixel ='<img ' + 'src="https://shareasale.com/sale.' +
'cfm?amount='+ inputData.amount + '&tracking=' +
inputData.tracking + '&transtype=SALE&' +
'merchantID=XXXX"'+
' width="1" height="1">';
document.write(pixel);
Does anyone have an idea why this code doesn't work and how to make it run? Much appreciated.
The tracking code provided by the affiliate is:
<img src="https://shareasale.com/sale.cfm?amount=AMOUNT&tracking=SUBSCRIPTION_ID&transtype=SALE&merchantID=XXXX" width="1" height="1">
I don't see anything wrong with how you're combining the variables, but I think it's not working because you are only creating a string, not an actual image element. Try the below instead:
var pixel = 'https://shareasale.com/sale.' +
    'cfm?amount=' + inputData.amount + '&tracking=' +
    inputData.tracking + '&transtype=SALE&' +
    'merchantID=XXXX';
var img = new Image();
img.src = pixel;
img.height = 1;
img.width = 1;
document.body.appendChild(img);
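If the field values can ever contain characters that are unsafe in a query string, it may also be worth URL-encoding them when the URL is built. A small helper along these lines (the function name is mine, not from the original code; the sample values are the ones quoted in the question):

```javascript
// Build the tracking-pixel URL, URL-encoding the dynamic field values.
function buildPixelUrl(inputData) {
  return "https://shareasale.com/sale.cfm" +
      "?amount=" + encodeURIComponent(inputData.amount) +
      "&tracking=" + encodeURIComponent(inputData.tracking) +
      "&transtype=SALE&merchantID=XXXX";
}

var pixel = buildPixelUrl({
  amount: "7.95",                  // contact__contact_profile__known_ltv
  tracking: "sub_AIhebhUVf1aV4z"   // subscription_id
});
```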
I am using the Rotten Tomatoes API, which is fairly straightforward. The following is my basic code:
var apikey = "xxxxx";
function queryForMovie(query) {
    var queryUrl = "http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=" + apikey + "&q=" + encodeURI(query);
    $.ajax({
        url: queryUrl,
        dataType: "jsonp",
        success: queryCallback
    });
}
function queryCallback(data) {
    var el = $('#movie-listings');
    $.each(data.movies, function (index, movie) {
        el.append('<img src="' + movie.posters.original + '" alt="' + movie.title + '">');
    });
}
$(document).ready(function () { queryForMovie("Star Wars"); });
However, this gives back a very small image.
What would be a good way to get a larger sized image, while limiting requests where possible?
** UPDATE **
Rotten Tomatoes has made configuration changes such that referencing CloudFront URLs directly no longer works. Therefore, this solution no longer works.
Such is the danger of using non-sanctioned workarounds.
Does anybody know of a good service for getting movie posters?
Original non-working answer:
Even though the Rotten Tomatoes API lists four separate images in a movie's posters object (thumbnail, profile, detailed, and original), they are all, currently, identical URLs:
"posters": {
"thumbnail": "http://resizing.flixster.com/AhKHxRwazY3brMINzfbnx-A8T9c=/54x80/dkpu1ddg7pbsk.cloudfront.net/movie/11/13/43/11134356_ori.jpg",
"profile": "http://resizing.flixster.com/AhKHxRwazY3brMINzfbnx-A8T9c=/54x80/dkpu1ddg7pbsk.cloudfront.net/movie/11/13/43/11134356_ori.jpg",
"detailed": "http://resizing.flixster.com/AhKHxRwazY3brMINzfbnx-A8T9c=/54x80/dkpu1ddg7pbsk.cloudfront.net/movie/11/13/43/11134356_ori.jpg",
"original": "http://resizing.flixster.com/AhKHxRwazY3brMINzfbnx-A8T9c=/54x80/dkpu1ddg7pbsk.cloudfront.net/movie/11/13/43/11134356_ori.jpg"
}
According to RT, high-resolution poster images are no longer available via the API, to keep the focus on ratings and reviews content.
However, if you're willing to "order off menu," you can still get at the full-resolution image. The part of the poster image URL following /54x80/ is the CloudFront URL for the original image:
http://resizing.flixster.com/AhKHxRwazY3brMINzfbnx-A8T9c=/54x80/dkpu1ddg7pbsk.cloudfront.net/movie/11/13/43/11134356_ori.jpg
...becomes...
http://dkpu1ddg7pbsk.cloudfront.net/movie/11/13/43/11134356_ori.jpg
A javascript implementation might look something like this:
// movie = RT API movie object
var original = movie.posters.original.replace(/^.*?\/[\d]+x[\d]+\//,"http://");
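Applied to the sample URL quoted above, the replace strips everything up to and including the /54x80/ segment and re-prefixes the remaining CloudFront host/path with http://:

```javascript
var poster = "http://resizing.flixster.com/AhKHxRwazY3brMINzfbnx-A8T9c=/54x80/dkpu1ddg7pbsk.cloudfront.net/movie/11/13/43/11134356_ori.jpg";
// Lazily match up to the first /[width]x[height]/ segment and drop it.
var original = poster.replace(/^.*?\/[\d]+x[\d]+\//, "http://");
// original is now the bare CloudFront URL for the full-size image
```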
This image will ordinarily be much, much larger than 54x80, and it may not be feasible to load and display large lists of these images. Trying to modify the resizing.flixster.com URL doesn't work; there appears to be some kind of resource-dependent hash involved. If you want to be able to downscale the images, you need to set up or find an image proxy service. I found Pete Warden's article on resizing and caching images with CloudFront to be of great help.
An example using the service he set up in the article might look like http://imageproxy.herokuapp.com/convert/resize/200x285/source/http%3A%2F%2Fdkpu1ddg7pbsk.cloudfront.net%2Fmovie%2F11%2F13%2F43%2F11134356_ori.jpg
In javascript, this would look something like:
// Match url: http://[some.kind/of/url/[height]x[width]/[original.cloudfront/url]
var url_parts = movie.posters.original.match(/^.*?\/([\d]+)x([\d]+)\/(.*)/);
var ratio = url_parts[1] / url_parts[2], // Determine the original image aspect ratio from the resize url
size = 200, // This is the final width of image (height is determined with the ratio)
original = "http://" + url_parts[3],
wxh = [size, Math.round(size/ratio)].join("x");
// Construct the final image url
movie.posters.original = [
"http://imageproxy.herokuapp.com/convert/resize/",
wxh,
"/source/",
encodeURIComponent(original)
].join("");
// The first request of this URL will take some time, as the original image will likely need to be scaled to the new size. Subsequent requests (from any browser) should be much quicker, so long as the image remains cached.
NOTE: Doing something like this depends on Rotten Tomatoes keeping their resize URLs in the same form (resize URL + [width]x[height] + CloudFront URL). Unless you set up your own image proxy service, you are also at the mercy of the proxy owner as far as uptime, performance, security, and image quality are concerned.