How to cache remote images in a NodeJS / AngularJS app?

The scenario:
A user posts a link to a website
A little AngularJS service fetches a preview, including an image
This information (including the remote URL of the image) is copied to a hidden form and sent to the server when the user clicks submit.
Now I have to download the image to my server for the preview; otherwise I would be hotlinking the images, which isn't good and could also cause problems later.
Because of the same-origin policy, I guess this can only be done server-side.
My idea so far is to create a little API: I pass in the URL of the image, and the server checks whether a local copy exists and, if not, downloads one on the fly:
http://example.com/staticapi/v1/get?img_url=ENCODED-IMG-URL
What I don't like about this idea is that a malicious user could just bombard this system with URLs.
An alternative would be to save the remote URL temporarily and let a background job download any remote images that haven't been processed yet.
How would you approach this?

Related

How to know if a JS file is loaded on server side?

I'm a developer for the website Friconix, a collection of free icons. Users can include our JS file on their web pages, as explained here: https://friconix.com/start/.
I would like to gather statistics on clients using our tools. How can I know, on the server side, which pages (URLs, or at least domains) request our JS file?
It's important to explain that, to keep loading time down, the JS file is not dynamically generated: no PHP file runs each time the file is requested; the file is saved as plain text on our server. I wonder whether the right solution is to add something to the .htaccess file?
Since the script is requested from your server every time a user loads a page, you can track who requests that path and how often.
A simple approach: the requests will be present in your request log files, so you can create a script that reads your log files every so often.
A second approach is to set up a special rule/location in nginx/Apache/whichever server you are running.
A third approach is to serve the script via a CDN that has all of these features built in (e.g. CloudFront).
This can be done via a simple REST API call from the script: when your script loads, it calls the REST API via an AJAX/XHR request. The request can contain a unique client ID. On the server side, you implement a simple API that accepts these requests and stores them, extracting the information needed for analytics.
All the information about the client, such as domain and IP address, can be gathered from the API requests made from the client's page.
Reference - How do I call a JavaScript function on page load?

How to prevent actual file path showing from inspecting network terminal

I'm playing an audio file in my web application. If I inspect the network tab, I can get the exact file path of the audio. Exposing the actual file path leads to a security problem. How can I prevent this?
You can't.
You can make this difficult for the user, but if your app in the browser can get the data from the server, then it can be done without your app too.
You cannot prevent the file path from appearing - but there are ways to prevent it from being used outside the application - or at least being used easily outside the application.
The steps are pretty much the same for any file (Blob) so it would be the same for an audio or video file.
You can prevent it by downloading the file with XHR and converting it to a blob URL, which is what YouTube and Netflix do for video. That way you have full control over the authentication and download process (make sure you download each chunk with a range request and merge the chunks to build the full video while streaming).
For example, this is a YouTube video URL; it is revoked after a while, so you probably can't open it in your browser:
This URL is not protected - but there are protected video files on youtube for their premium services which will expect certain headers and cookies to be present.
In order to make this fully protected, you would have to generate a onetime token for each chunk.
https://r6---sn-4g5ednll.googlevideo.com/videoplayback?expire=1563544755&ei=U3gxXZKKFoKZgQeOuYuoBA&ip=62.41.73.80&id=hHW1oY26kxQ.348&itag=140&source=yt_live_broadcast&requiressl=yes&mm=44%2C26&mn=sn-4g5ednll%2Csn-5hne6nsz&ms=lva%2Conr&mv=u&mvi=5&pl=24&live=1&hang=1&noclen=1&mime=audio%2Fmp4&gir=yes&compress=yes&mt=1563522739&fvip=4&keepalive=yes&c=WEB&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Clive%2Chang%2Cnoclen%2Cmime%2Cgir%2Ccompress&sig=ALgxI2wwRQIhANT3eXWZbPN0bFz5j7Zs56veNXpvNuXcvt0yOzwlNx-zAiB4ZjcCA7GTn6k16AiwQcOq2XaUkA3SWFuIdskws8MqBQ%3D%3D&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl&lsig=AHylml4wRgIhAJ3FqpB1r_T_ErODyHxwcWZZdhxeVJ1yTdovfn0BPBmuAiEA5KbeCcZXe8C7l6W5IjmXWXwN1VXvgwiP7Jn2tWdReoc%3D&alr=yes&cpn=rp9-DtQeCOUxaQZ9&cver=2.20190718&sq=1134933&rn=10&rbuf=15966
At runtime it is transformed into the following URL, which is local to the user's session on the website and cannot be used to download the file manually unless a script is injected into the page:
blob:https://www.youtube.com/f49abcd9-d431-42f0-8e04-4e156a78a8cc
Now you need to make sure the video is only accessible to authenticated users the normal way (HTTP-only cookies, so users can't easily read them), plus CSRF and CORS:
CORS - only serve requests from your origin, answering with Access-Control-Allow-Origin: https://www.youtube.com
JWT - only accept validated users (cookies)
CSRF - a token to prevent cross-site request forgery (i.e. users can't access the file without going through the website); tokens are invalidated after each user session
SSL - it is always a good idea to use SSL/TLS to prevent easy interception
This is more difficult than I make it sound, and determined attackers can still download the file with extra steps (there are downloaders for YouTube/Udemy/Netflix etc.), but it will prevent most users from easily downloading it.
** In case this isn't clear: there is no solution that involves only the client; the fix must come from the server side, preventing direct access to the file.
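A minimal browser-side sketch of the blob-URL approach above (the fixed chunk size, audio element, and URL are placeholders):

```javascript
// Build a Range header value for chunk N of a chunked download
function rangeFor(chunkIndex, chunkSize) {
  const start = chunkIndex * chunkSize;
  return 'bytes=' + start + '-' + (start + chunkSize - 1);
}

// Fetch the protected file with the session cookies attached, then expose
// only a blob: URL to the DOM -- the real path never appears as the src
async function loadProtectedAudio(url, audioEl) {
  const res = await fetch(url, { credentials: 'include' }); // HTTP-only cookies ride along
  const blob = await res.blob();
  audioEl.src = URL.createObjectURL(blob); // shows up in the DOM as blob:https://...
}
```

The network tab will still show the real request, so this only hides the path from the DOM; the server-side checks listed above do the actual protecting.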

Run a script in a google cloud from a local html page

I would like to run a script in a google cloud server using a local HTML page.
To be more clear the steps would be:
Open a local HTML page on my local computer.
Push a button that triggers a script on my Google Cloud server.
The script creates a file on the server that I can download by pressing another button.
I'm new in this field and I don't know where to start.
How do I connect to the server via HTML? (PHP?, Javascript?)
How does the authorization process work?
There are several languages and strategies that you can use.
Locally, you can use JavaScript or PHP (which needs installation and configuration) to make an HTTP request (other protocols are possible too) to a script on the server (in PHP, JavaScript, or another language), which, upon receiving the request, processes it and generates a file at a specific path.
Then, with the other button, you make a request to that path to download the file.
My suggestion is to choose the languages and implement this with them to understand the process.
Create an HTML page and put a button on it.
Attach a function to the button's onClick to send an Ajax request.
That would create a cross-origin request challenge for you down the road, though.
You can simply put a URL on your local web page, styled as a button, pointing to your Google Cloud-hosted application.
Create the file on the server side, and set an HTTP header
Content-Disposition: attachment; filename="results.csv"
to make the file download on the user's end.

how to get images from other web page and show in my website

I just want to know how to get images from another web page and show them on my website.
Case flow is:
Type some page URL into a text box and submit
Collect all images on that web page (not the entire site) and show them on my webpage
So, you need to get images from a page, and the input is the address of that page. Well, you have two solutions:
I. If this is functionality for your site which others will use, then plain JavaScript is not enough, because browsers' same-origin policy blocks getting such data from other pages. What you need in this case is to send the URL to a script on your server, which will download that page, parse it for <img> tags, and return the list of image srcs.
How exactly to do this is a fairly involved question, because it depends on your site's server-side programming language. In any case, such functionality would consist of client-side JavaScript using AJAX techniques and a server-side script (e.g. PHP); the client script is pretty much straightforward.
On client side your js has to:
1. Get desired URLs
2. Send them to server
3. Wait for server's response (which contains srcs of images on desired page)
4. Create img tags with srcs which you got from server script
Keywords to google here are, for example, AJAX, XMLHttpRequest, and JSONP (sorry if you already know them :)
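A browser-side sketch of those four steps (the /api/images endpoint and its JSON response shape are my own assumptions):

```javascript
// Step 2: encode the desired page URL for a hypothetical server endpoint
function buildApiUrl(pageUrl) {
  return '/api/images?url=' + encodeURIComponent(pageUrl);
}

// Steps 1-4: send the URL, wait for the list of srcs, create the img tags
async function showImages(pageUrl, container) {
  const res = await fetch(buildApiUrl(pageUrl));
  const srcs = await res.json(); // assumed: the server replies with a JSON array of srcs
  for (const src of srcs) {
    const img = document.createElement('img');
    img.src = src;
    container.appendChild(img);
  }
}
```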
On the server side your script (php|ruby|python|perl|brainfuck) has to:
1. Get the page URL sent by the JavaScript code in step 2
2. Download the page at that URL
3. Parse it looking for img tags and their srcs
4. Send the list of srcs (as XML, JSONP, or any other format) back to the client
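The parsing step (3) can be sketched in Node; a regex is shown for brevity, though a real implementation should use a proper HTML parser:

```javascript
// Naive <img> src extraction -- misses srcset and breaks on exotic markup
function extractImgSrcs(html) {
  const srcs = [];
  const re = /<img\b[^>]*\bsrc\s*=\s*["']([^"']+)["']/gi;
  let match;
  while ((match = re.exec(html)) !== null) {
    srcs.push(match[1]);
  }
  return srcs;
}
```

For example, extractImgSrcs('<p><img src="a.png"></p>') returns ['a.png'].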
II. If you need to get images from other pages only for your personal use, you can write an extension for your browser. This approach doesn't require any server-side scripts.
If you want to scrape other websites with JavaScript, you should create a server-side script which can act as a proxy, or you can use YQL.
Here's my answer for a cross-domain Ajax call with YQL:
Cross Domain Post method ajax call using jquery with xml response
First of all, check the copyright. Copy an image only if the owner provides it for free use, and read and understand the usage license.
If the image is free to use as stated by the owner under the license, then download the image and use it. Also, please don't forget to keep a copy of the license and the URL of the website from which you downloaded the image.
Downloading before use is suggested so that if the other website shuts down tomorrow, your website remains unaffected.
Last but not least, try to design/shoot your own images. Even if they are not as good as others', at least they are genuine.

What's the security risk of having javascript access an external image?

Using JavaScript, one cannot read the pixels of an image (hosted on a different domain than the one the JavaScript comes from) through a canvas.
What's the security risk with that? It can't just be to avoid phishing, right?
The same-origin policy stops remote data from being accessible to a different domain. One of the main attacks this stops is circumventing a user's login by waiting for them to be logged into another site and then piggy-backing your request on their authenticated session.
Whether the data loaded is an HTML snippet, an image file, or anything else, it's blocked so you can't take advantage of it in any way (for example, by inspecting the pixel data of an image retrieved this way).
There is one tricky attack vector connected with external images: someone can post an image that is loaded from an external resource they control. After some time, that URL can be changed to respond with a request for basic HTTP authentication, so other users will see windows requesting their login and password. Some users, especially inexperienced ones, may enter their credentials, which are then sent to the attacker. So be careful with external resources.
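A small browser-side sketch of how this plays out with canvas (the key assumption is that pixel access only works when the image server sends CORS headers):

```javascript
// Compare the page's origin with the image's (resolving relative URLs)
function isCrossOrigin(pageUrl, imgUrl) {
  const page = new URL(pageUrl);
  const img = new URL(imgUrl, pageUrl);
  return page.origin !== img.origin;
}

// Drawing a cross-origin image without CORS approval taints the canvas:
// a later ctx.getImageData() call will throw a SecurityError
function drawForPixelAccess(imgUrl, canvas) {
  const img = new Image();
  if (isCrossOrigin(location.href, imgUrl)) {
    img.crossOrigin = 'anonymous'; // only helps if the image server sends CORS headers
  }
  img.onload = () => canvas.getContext('2d').drawImage(img, 0, 0);
  img.src = imgUrl;
}
```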
