I'm curious if anyone could fill me in on how safe it is to use toDataURL on a user-provided image. The basic idea is that User A would upload an image in their browser and it would be converted to a data URL; then, ignoring the steps in between, User B (as well as other users) would eventually retrieve that data URL, where it would be converted back into an image and displayed in User B's browser.
So my question revolves around whether someone could abuse the system to inject code into User B's browser, or otherwise cause havoc. In general, what security considerations must be taken into account when using toDataURL and later converting the result back?
I'm aware that cross-origin images taint a canvas, which disallows any methods that involve the data, but I'm not aware of how much of a blanket solution this is. I've read that some browsers don't have this restriction while others (and even other versions of the same browser) implement this restriction differently depending on the content of the cross-origin image.
What I've found in my research so far:
this question, where the answer pointed to a great article that looked at it from the perspective of storing the uploaded image on a server.
this question, where the answer points out an interesting way to hide a script in an image that I'd never seen before; but I'm not sure what vulnerability it creates if I'm not trying to extract a script from that image and run it.
and this link, which details a great reason why browsers choose to restrict access to image data for cross-origin images. I always assumed it was just about protecting against malicious images, but now realize it also protects against much more.
None of the above sufficiently approaches it from the perspective of one user attacking another user by uploading an image (one that doesn't stay as uploaded, but instead gets converted to a data URL) that another user later downloads and views (with the img src set to the data URL, not the malicious user's original upload). The second link comes close to answering my question, but as I understand it, the methods it details wouldn't work without the malicious user also having injected some script into the viewing user's browser.
To go along with this question is an example of what I would like to do, including the file upload/conversion to a data URL, along with a sample data URL to try out the import (this sample URL is safe to import and small, so it imports quickly):
window.onload = function() {
  document.getElementById("convert").onclick = convert;
  document.getElementById("import").onclick = importF;
  let imageLoader = document.getElementById("imageLoader");
  imageLoader.addEventListener('change', e => {
    let reader = new FileReader();
    reader.onload = (ee) => {
      loadImage("imageCanvas", ee.target.result);
    };
    reader.readAsDataURL(e.target.files[0]);
  }, false);
};

function loadImage(id, src) {
  let canvas = document.getElementById(id);
  let ctx = canvas.getContext("2d");
  let img = new Image();
  img.onload = () => {
    canvas.width = img.width;
    canvas.height = img.height;
    ctx.drawImage(img, 0, 0);
  };
  img.src = src;
}

function convert() {
  let canvas = document.getElementById("imageCanvas");
  console.log(canvas.toDataURL());
}

function importF() {
  let imageImport = document.getElementById("imageImport");
  let url = imageImport.value;
  loadImage("imageCanvas", url);
}
<label>Upload Image:</label>
<input type="file" id="imageLoader" name="imageLoader"/>
<br/>
<label>Import Image:</label>
<input type="text" id="imageImport" name="imageImport"/>
<br/>
<label>Sample URL:</label>
<code style="user-select: all;"> data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAOCAYAAAAmL5yKAAAApUlEQVQ4T2NkQALyKu5GjP8Z01kZ/1n/+s+kjSyHi80Ik1BQdvf4z8C4nYPx/z819t9M+py/GRj+MzAwgFTgocEGyCl75DEyMEz04f3OEC34lUGY+R8xloPVMMopeTgzMjLsMeb8xdAu8YFojTCFjPJKblNlWf+lTpV5z8rBCHIraYBRQ9Xtoi3XL70S0U+k6YSqZpRX9vgfK/CVIVbw66gBIzcMAHB4Ryt6jeYXAAAAAElFTkSuQmCC </code>
<br/>
<button id="import"> Import from URL </button>
<button id="convert"> Convert to URL </button>
<br/>
<canvas id="imageCanvas"></canvas>
There seems to be some confusion here, and given how misleading your links are, I can understand why.
Tainted canvas
"Tainting the canvas" is a security operation which blocks .toDataURL() and any other exporting method like .toBlob(), .captureStream() or 2D context's .getImageData().
There are only a few cases where this operation is done:
Cross-origin resources: That's the most common case on the web. Site A drew a resource, like an image from Site B, on a canvas. If Site B didn't tell the browser that it allows Site A to read this content by sending the appropriate Access-Control-Allow-Origin header, then the browser has to "taint" the canvas.
This only protects the resource. There is no real security added to Site A in that case.
Information leakage: That's more of an exception, but it's still a thing. Browsers may decide on their own that some actions could leak private information about their user. The most common case is to "taint" the canvas when an SVG image containing a <foreignObject> is painted on it. Since this tag can render HTML, it can also leak which links have been visited, for instance. Browsers should take care of anonymizing these resources, but nevertheless, Safari still taints any such SVG image, Chrome buggily still taints the ones served from a blob: URI, IE did taint any SVG image (not only the ones with <foreignObject>), and all of them did at some point taint the canvas when certain external filters were used.
Information leakage II: There is also something no browser can fight against when reading a canvas-generated bitmap. Every hardware and software combination will produce slightly different results when asked to perform the same drawing operations. This can be used to fingerprint the current browser. Some browser extensions will thus block these methods too, or make them return dummy results.
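For illustration, here is a minimal sketch of the cross-origin case (the image URL is a placeholder; whether the export throws depends entirely on the remote server's CORS headers):
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
const img = new Image();
// img.crossOrigin = 'anonymous'; // only helps if the server sends CORS headers
img.onload = () => {
  canvas.width = img.width;
  canvas.height = img.height;
  ctx.drawImage(img, 0, 0); // cross-origin without CORS: the canvas is now tainted
  try {
    console.log(canvas.toDataURL());
  } catch (err) {
    console.log('Export blocked:', err.name); // SecurityError
  }
};
img.src = 'https://other-origin.example/picture.png'; // hypothetical cross-origin image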
Now, none of this really protects from malicious images.
Images embedding malicious code
The kinds of images that can embed malicious code generally exploit vulnerabilities in image parsers and renderers. I don't think any up-to-date parser or renderer used by a web browser is still vulnerable to such attacks, but even if there were one, by the time the image has been drawn to the canvas it's already too late: tainting the canvas would not protect anything.
One thing you may have heard about is stegosploit. This consists of hiding malicious code in an image, but the HTML canvas there was used to decode that malicious code. So if you don't have the script to extract and execute the embedded payload, it doesn't represent much of a risk; and actually, if you do re-export the image, there is a good chance that this embedded data gets lost.
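To make that last point concrete, here is a hedged sketch of such a re-export: redrawing the image onto a canvas and exporting it builds a brand-new file from the decoded pixels only, so data hidden in non-pixel chunks or appended after the image data does not survive (the function name is illustrative):
function reencodeImage(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  canvas.getContext('2d').drawImage(img, 0, 0);
  // The exported PNG is rebuilt from raw pixels; none of the original file's
  // metadata chunks or trailing bytes are carried over
  return canvas.toDataURL('image/png');
}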
Risks inherent with uploading content to a server
There are many risks when uploading anything to your server. I can't stress this enough: read the OWASP recommendations carefully.
Particular risks when uploading a data: URL
data: URLs are a good vector for XSS attacks. Indeed, it is very likely that you will build HTML code directly using that data: URL. If you don't apply the correct sanitization steps, you may very well load an attacker's script instead of an image:
const dataURIFromServer = `data:image/png,"' onerror="alert('nasty script ran')"`;
const makeImgHTML = ( uri ) => `<img src="${uri}">`;
document.getElementById('container').innerHTML = makeImgHTML(dataURIFromServer);
<div id="container"></div>
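A safer alternative (a sketch, not a complete sanitizer): assign the value through the DOM instead of building HTML strings, and reject anything that isn't a plain base64 image data: URL:
function showImage(uri, container) {
  if (!/^data:image\/(png|jpeg|gif|webp);base64,[A-Za-z0-9+\/=]+$/.test(uri)) {
    throw new Error('Rejected: not a plain base64 image data: URL');
  }
  const img = new Image();
  img.src = uri; // assigned as a DOM property, never parsed as HTML
  container.appendChild(img);
}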
A final word on data: URLs
data: URLs are a means of storing data in a URL so that it can be passed around directly, without the need for a server.
Storing a data: URL to a server is counter-productive.
To represent binary data, it needs to be encoded in base64 so that all unsafe bytes can still be represented in most encodings. Since every 3 bytes of binary become 4 characters, this encoding grows the data by about 33% of its original size, and you will have to store the result as a string, which is not convenient for most databases.
Really, data: URLs are from another era. There are very few cases where you want to use one. Most of what you want to do with a data: URL, you should do with a Blob and a blob: URL instead. For instance, upload your image as a Blob directly to your server. Use the canvas .toBlob() method if you need to export its content. Use img.src = URL.createObjectURL(file) if you want to present an image picked by your user.
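A sketch of that Blob-based flow (assumes an existing canvas; the '/upload' endpoint is a placeholder):
canvas.toBlob((blob) => {
  // Preview locally, with no base64 inflation:
  document.querySelector('img').src = URL.createObjectURL(blob);
  // Upload the binary directly:
  const form = new FormData();
  form.append('image', blob, 'upload.png');
  fetch('/upload', { method: 'POST', body: form });
}, 'image/png');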
TL;DR
- In your scenario, toDataURL() in itself will not create any risk, nor will it prevent any.
- Use the well-known techniques to sanitize your users' uploads (never trust them, and remember they may not even be using your UI to talk to your server).
- Avoid data: URLs. They are inefficient.
I'm trying to figure out how to call the File Save As Command in Firefox
(the one you get when you right-click an image and save it) to save an image using JS (or if there is something else I can use, I would be grateful if you pointed me in that direction). I am looking for an example of how to open the Save As menu and pre-fill the file name field ... I've been searching furiously and have come up with zip. In my searching I saw that you cannot directly save a file to disk, but is it impossible to call the Save As function? Any suggestions would be greatly appreciated!
Edit:
I'm not looking to make this code available to everyone, and the JavaScript is client-side; I'm just writing a small script to make saving photos a little easier in terms of naming them.
-Will
No, you can't do this, and really you are trying to find a solution in a way that does not embrace the internet and the way people interact with content. What you are trying to do is call an operating-system operation from JavaScript. Even if this were somehow possible (I don't think it is at all), it would be a very poor solution. Think about all the different operating systems Firefox runs on: if you found a solution for Windows 7, what about a Mac running Firefox?
What you should consider is that the user decides whether to save something to their computer, not the programmer of the application. Provide a link to the file; most users know how to right-click a link and select Save As. Add a help tip explaining what to do as well.
To give a file a specific name, or even start an automatic download when a user clicks or takes some kind of action, you can create a response from your server that is a PDF, Excel, JPEG, DOC, DOCX or one of many other file types. The server can load the file in memory and send it as a response with the proper header information.
For example, to set a specific name for the file when the user downloads it, you can set your response header with something like:
header('Content-Disposition: attachment; filename="downloaded.pdf"');
You can use the anchor element's download attribute to specify that a link is to be downloaded. Note that this is not implemented in all browsers; Chrome, Firefox, and Opera currently support it.
https://developer.mozilla.org/en-US/docs/Web/API/HTMLAnchorElement
HTMLAnchorElement.download
Is a DOMString indicating that the linked resource is intended to be downloaded rather than displayed in the browser. The value represents the proposed name of the file. If the name is not a valid filename for the underlying OS, the browser will adapt it. The value is a URL with a scheme like http:, file:, data: or even blob: (created with URL.createObjectURL).
Demo
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
ctx.fillRect(25,25,100,100);
ctx.clearRect(45,45,60,60);
ctx.strokeRect(50,50,50,50);
var link = document.getElementById("link");
// Set href to the URL (or data URL) that you want downloaded
link.href = "http://placehold.it/350x350";
// Set download to the default filename you want to use
link.download = "image.png";
<canvas id="canvas" width="150" height="150"></canvas>
<a id="link">Click to download</a>
You can also specify a regular URL to a file, but note that if the server sends a filename header (Content-Disposition ... filename...), it will override whatever you have in the download attribute.
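As a variation on the demo above, you can point the link at the canvas's own drawing instead of an external URL (this reuses the link and canvas variables from the demo):
// Download the canvas content itself as a PNG
link.href = canvas.toDataURL('image/png');
link.download = 'drawing.png';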
I'm making an HTML/JavaScript tool that does something with images. Frankly, I just use HTML because a Java or C++ GUI is way too complicated and not worth the effort.
The tool just displays an image and then shows percentage and absolute coordinates as you hover over it.
My aim is to create a little GUI for "last images", allowing the user to quickly redisplay a recent image. And because the application is client-only, I can't use any kind of server storage. There's a localStorage object, but it seems like a bad idea to store several MB of images in it.
What do you think? Should I use localStorage for that? Or is there an alternative? Or maybe the server cooperation is totally necessary?
You can use a compression algorithm to compress the base64 string representation of the images. I used a similar approach in my code and it allowed me to store more images than would otherwise fit.
Compression library: LZ-string http://pieroxy.net/blog/pages/lz-string/index.html
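A sketch of how that could look with the LZ-string library linked above (assumes its global LZString object and a canvas you have already drawn the image onto):
const dataUrl = canvas.toDataURL();               // base64 image string
const packed = LZString.compressToUTF16(dataUrl); // UTF-16-safe for localStorage
localStorage.setItem('lastImage', packed);
// ...later:
const unpacked = LZString.decompressFromUTF16(localStorage.getItem('lastImage'));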
It's either that or using the HTML5 File API, but if you are just saving a few images under 5 MB, HTML5 Web Storage should be enough. If you use Web Storage, use an HTML5 canvas to convert the image into text (a data URI), like this:
var img = new Image();
img.onload = function() {
  // Wait for the load event, otherwise the canvas would be drawn empty
  localStorage.setItem("lastImage", imgToURI(img));
};
img.src = "same_origin.png";

function imgToURI(img) {
  var canvas = document.createElement("canvas");
  var context = canvas.getContext("2d");
  canvas.width = img.width;
  canvas.height = img.height;
  context.drawImage(img, 0, 0);
  return canvas.toDataURL();
}
And then for retrieving the saved image:
var last = new Image();
last.src = localStorage.getItem("lastImage");
Note: Most browsers only allow 5 MB of local storage.
Note: While localStorage is highly supported, canvas is not supported in IE8.
Note: The HTML5 File API approach wouldn't be user-friendly: as of now, Chrome is the only browser that allows writing files with its FileSystem API. For the others you would have to convert the images to data URIs, build a link to all the data in the form of a data-URI text file, and have the user download it and reload it every time.
Hope it helps!
My program Precomp can be used to further compress already compressed file formats like GIF, PNG, PDF, ZIP and more. Roughly summarized, it does this by decompressing the compressed streams, recompressing them and storing the differences between the expected compressed stream and the actual compressed stream. As an example, this rotating earth picture from Wikipedia is compressed from 1429 KB to 755 KB. The process is lossless, so the original GIF file can be restored.
The algorithm for the GIF file format can be isolated and implemented relatively easily, so I was thinking about a proof-of-concept implementation in JavaScript. The web server would send a compressed version of the GIF file (a .pcf ending, essentially a bzip2-compressed file of the GIF image contents) and the client would decompress the data, recompress it to GIF, and display it. The following things would have to be done:
The web site author would have to compress his GIF files using the standard version of Precomp and serve these instead of the GIF files, together with a JavaScript for the client-side recompression.
The client would decompress the bzip2-compressed file; this could be done using one of the existing bzip2 JavaScript implementations.
The client would recompress the image content into the original GIF file.
The process trades bandwidth for CPU usage on the client side.
Now my questions are the following:
Are there any general problems with the process of loading a different file and "converting" it to GIF?
What would you recommend to display before the client side finishes (image placeholder)?
What do I have to do to make sure the .pcf file is cached? Bandwidth savings would be useless if it doesn't get cached.
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
Let's begin by answering your specific questions. Code example below.
Q&A
Are there any general problems with the process of loading a different file and "converting" it to GIF?
The main problem is complication. You are effectively writing a browser add-on, like those for JPEG 2000.
If you write real browser add-ons, each major browser does it differently, and they change add-on formats occasionally, so you have to actively maintain them.
If you write a JS library, it will be easier to write and maintain, but it will be unprivileged and suffer from limitations such as cross-origin restrictions.
What would you recommend to display before the client side finishes (image placeholder)?
Depends on what your format can offer.
If you encode the image dimensions and a small thumbnail near the start of the file, you can display an accurate placeholder pretty early.
It is your format, after all.
What do I have to do to make sure the .pcf file is cached? Bandwidth savings were useless if doesn't get cached.
Nothing different from other files.
Configure the Expires and Cache-Control headers on the server side and they will be cached.
Manifest and prefetch can also be used.
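For example, a minimal Node/Express-style sketch of those caching headers (the route, paths, and max-age are illustrative, not a prescription):
const express = require('express');
const path = require('path');
const app = express();
app.get('/images/:name', (req, res) => {
  res.set('Cache-Control', 'public, max-age=31536000'); // cache for a year
  // basename() keeps the lookup inside the images directory
  res.sendFile(path.join(__dirname, 'images', path.basename(req.params.name)));
});
app.listen(8080);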
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
This is tricky. When JavaScript is disabled, you can only provide replacement elements (via noscript), not rewrite attributes.
This means you cannot create an image somewhere that points to the .pcf file and ask the browser to rewrite the src attribute when JS is unavailable.
I think the best solution to support no JS is outputting the images with document.write, using noscript as fall back:
<noscript>
<img src=demo.gif width=90>
</noscript><script>
loadPcf("demo.pcf","width=90")
</script>
(Some library or framework may make you consider <img src=demo.gif data-pcf=demo.pcf>.
This will not work for you, because browsers will preload demo.gif before your script kicks in, causing additional data transfer.)
Alternatively, browser add-ons are unaffected by "disable JS" settings, so if you make add-ons instead you don't need to worry about it.
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Perhaps. You can code a user interface and store the preference in a cookie or in localStorage.
Then you can detect the preference and switch the logic in server code (if a cookie) or in client code.
If you are doing add-ons, all browsers provide a reliable preference mechanism.
The problem is that, again, every browser does it differently.
Would it be possible to display interlaced GIFs as supposed (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
If you hand browsers a partial image, they may think the image is corrupted and refuse to show it.
In that case you have to implement your own GIF decoder AND encoder, so that you can hand the browser a complete placeholder image, just to be safe.
(new) Can I decode an image loaded from another site?
I must also repeat the warning that non-add-on JS image decoding does not work with cross-origin resources.
This means all .pcf files must be on the same server, same port, and same protocol as the site using them (unless the remote server explicitly opts in with CORS response headers).
For example, you cannot share images across multiple sites or do optimisations like domain sharding.
Code Example
Here is a minimal example that creates an <img>, loads a GIF, halves its width, and puts it back into the <img>.
To support a placeholder or progressive loading, listen to onprogress instead of (or in addition to) onload.
<!DOCTYPE html><html><head><meta charset="UTF-8"><script>
function loadPcf( file, attr ) {
var atr = attr || '', id = loadPcf.autoid = 1 + ~~loadPcf.autoid;
document.write( '<img id=pcf'+id+' ' + atr + ' />' );
var xhr = new XMLHttpRequest();
xhr.responseType = 'arraybuffer'; // IE 10+ only, sorry.
xhr.onload = function() { // I am loading gif for demo, you can load anything.
var data = xhr.response, img = document.querySelector( '#pcf' + id );
if ( ! img || ! ( data instanceof ArrayBuffer ) ) return; // parentheses matter: "! data instanceof ArrayBuffer" is always false
var buf = new DataView( data ), head = buf.getUint32(0), width = buf.getUint16(6,1);
if ( head !== 1195984440 ) return console.log( 'Not a GIF: ' + file ); // 'GIF8' = 1195984440
// Modify data, image width in this case, and push it to the <img> as gif.
buf.setInt16( 6, ~~(width/2), 1 );
img.src = URL.createObjectURL( new Blob( [ buf.buffer ], { type: "image/gif" } ) );
};
xhr.open( 'GET', file );
xhr.send();
}
</script>
<h1>Foo<noscript><img src=a.gif width=90></noscript><script>loadPcf("a.gif","width=90")</script>Bar</h1>
If you don't need <noscript> compatibility (and can thus accept that Facebook/Google+ won't see the images when the page is shared), you can put the .pcf file in the <img> src and use JS to handle them en masse, so that you don't need to call loadPcf for each image; this also makes the HTML much simpler.
How about <video>?
What you envisioned is mostly doable, in theory, but perhaps you should reconsider.
Judging from the questions you ask, it will be quite difficult for you to define and pull off your vision smoothly.
It is perhaps better to encode your animation in WebM and use <video> instead.
Better browser support: back to IE 9 if you add H.264 to make it a dual-format video (remember you need IE 10+ just to modify binary data).
Size: movies have many, many options and tricks to minimise size, and what you learn can be reused in the future.
Progressive: <video> has had adaptive-streaming tech for some time, and hopefully it will stabilise soon.
JavaScript: <video> does not depend on JavaScript.
Future-proof: both WebM and H.264 will be supported by many programs, long after you stop working on your special format.
Cost-effective: creating a low-bandwidth option with smaller or lower-quality media is easier and more reliable than a custom format. This is why Wikipedia and YouTube offer their media in different resolutions.
For non-animations, PNG can also be colour-indexed and 7z-optimised (keeping the PNG format).
An icon-size indexed PNG is often smaller than the same GIF.
Or perhaps your vision (as described on the pcf website) is the capability to encode many different file types, not only GIF.
That would be more like supporting a new network protocol, and is likely beyond the scope of humble JavaScript (e.g. how are you going to handle PDF download or streaming?).
First, a little background:
I apologize ahead of time for the long-winded nature of this preface; however, it may assist in providing an alternate solution that is not specific to the nature of the question.
I have an ASP.NET MVC application that uses embedded WinForms UserControls. These controls provide "ink-over" support on Tablet PCs through the Microsoft.Ink library. They are an unfortunate necessity due to an IE8 corporate standard; otherwise, the HTML5 canvas would be the solution.
Anyway, an image URL is passed to the InkPicture control through a <PARAM>.
<object VIEWASEXT="true" classid="MyInkControl.dll#MyInkControl.MyInkControl"
id="myImage" name="myImage" runat="server">
<PARAM name="ImageUrl" value="http://some-website/Content/images/myImage.png" />
</object>
The respective property in the UserControl takes that URL, calls a method that performs an HttpWebRequest, and the returned image is placed in the InkPicture.
public Image DownloadImage(string url)
{
    Image _tmpImage = null;
    try
    {
        // Open a connection
        HttpWebRequest _HttpWebRequest = (HttpWebRequest)HttpWebRequest.Create(url);
        _HttpWebRequest.AllowWriteStreamBuffering = true;
        // Use the default credentials
        _HttpWebRequest.Credentials = CredentialCache.DefaultCredentials;
        // Request the response and open the data stream;
        // "using" guarantees cleanup even if Image.FromStream throws
        using (System.Net.WebResponse _WebResponse = _HttpWebRequest.GetResponse())
        using (System.IO.Stream _WebStream = _WebResponse.GetResponseStream())
        {
            // Convert the web stream to an image
            _tmpImage = Image.FromStream(_WebStream);
        }
    }
    catch (Exception)
    {
        // Rethrow without resetting the stack trace ("throw ex;" would)
        throw;
    }
    return _tmpImage;
}
Problem
This works, but there's a lot of overhead in this process that significantly delays my webpage from loading (15 images taking 15 seconds... not ideal). Doing Image img = new Bitmap(url); in the UserControl does not work in this situation because of FileIO permission issues (full trust or not, I have been unsuccessful in eliminating that issue).
Initial Solution
Even though using canvas is not currently an option, I decided to test a solution using it. I would load each image in JavaScript, then use canvas and toDataURL() to get the base64 data. Then, instead of passing the URL to the UserControl and having it do all the legwork, I pass the base64 data as a <PARAM> instead. The control then quickly converts that data back to an image.
That 15 seconds for 15 images is now less than 3 seconds. Thus began my search for an image-to-base64 solution that works in IE7/8.
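For reference, a sketch of the canvas-based test described above (the helper name and callback shape are illustrative, not the actual production code):
function toBase64(url, done) {
  var img = new Image();
  img.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    canvas.getContext('2d').drawImage(img, 0, 0);
    done(canvas.toDataURL('image/png'));
  };
  img.src = url; // must be same-origin, or the canvas is tainted and toDataURL throws
}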
Here are some additional requirements/restrictions:
The solution cannot have external dependencies (e.g. $.getImageData).
It needs to be 100% encapsulated so it can be portable.
The source and quantity of images are variable, and they must be in URL format (base64 data up front is not an option).
I hope I've provided sufficient information and I appreciate any direction you're able to give.
Thanks.
You can use any of the FlashCanvas, fxCanvas or excanvas libraries, which simulate canvas using Flash or VML in old internet explorer versions. I believe all of these provide the toDataURL method from the Canvas API, allowing you to get an encoded representation of your image.
After extensive digging around (I'm in the same fix), I believe this is the only solution, short of writing a PHP script that you can send the image to. The problem with that, of course, is that there isn't a way to send images to the PHP script unless any of these three conditions is true:
The browser supports typed arrays (Uint8Array)
The browser supports sendAsBinary
The image is being uploaded by someone via a form (in which case it can be sent to a PHP script that responds with the base 64 encoding)
I'm currently developing a solution for a web-to-print, poster printing application.
One of the features I'd like to include is the ability to 'edit' (crop/scale/rotate) a given image before proceeding to order a poster of said image.
To avoid the requirement of the user uploading the image to a remote server before editing, I'd like to know the following:
Is it possible (using JavaScript) to load an image stored on the client machine into the browser / browser memory for editing, without uploading the image to a remote server? And if so, how is this done?
Thanks,
BK
The image can be edited without the user needing to upload it to the server.
Take a look at the link below; it can be done quite easily.
Reading local files using Javascript
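For a quick idea of what that looks like, here is a minimal FileReader sketch (the element ids are illustrative):
document.getElementById('picker').addEventListener('change', function (e) {
  var file = e.target.files[0];
  if (!file || !/^image\//.test(file.type)) return; // only accept images
  var reader = new FileReader();
  reader.onload = function (ev) {
    // ev.target.result is a data: URL; nothing ever leaves the machine
    document.getElementById('preview').src = ev.target.result;
  };
  reader.readAsDataURL(file);
});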
Yes you can! But in order to do it, the browser must support local storage. It is an HTML5 API, so most modern browsers will be able to do it. Remember that localStorage can save only string data, so you must convert the image into a string: your image source will end up being a data: URL.
Here is a short snippet that will help you:
if (typeof(Storage) !== "undefined") {
  // Here you can use localStorage.
  // This checks whether the image is already saved as a data URL string:
  if (localStorage.getItem("wall_blue") !== null) {
    var globalHolder = document.getElementById('globalHolder');
    var wall = localStorage.getItem('wall_blue');
    globalHolder.style.backgroundImage = "url(" + wall + ")";
  } else {
    // If the image is not yet in local storage, save it for the future
    // using the canvas API and its toDataURL method
    var img = new Image();
    img.src = 'images/walls/wall_blue.png';
    img.onload = function () {
      var canvas = document.createElement("canvas");
      canvas.width = this.width;
      canvas.height = this.height;
      var ctx = canvas.getContext("2d");
      ctx.drawImage(this, 0, 0);
      var dataURL = canvas.toDataURL();
      localStorage.setItem('wall_blue', dataURL);
    };
  }
} else {
  // Here you would upload the image without local storage
}
Hope you will find this short snippet useful. Remember, localStorage saves only string data, so you can't store an image object directly; you have to serialize it to a data URL string first.
Oh, and by the way: if you are using Jcrop to crop the images, you have to copy the image's base64 string into the form and send it to the server manually, because Jcrop only handles images as files, not as base64 strings.
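A hedged sketch of that manual hand-off (the ids are hypothetical): copy the canvas's data URL into a hidden field just before the form is submitted, so a normal POST carries it to the server:
document.getElementById('uploadForm').addEventListener('submit', function () {
  var canvas = document.getElementById('cropCanvas');
  document.getElementById('imageData').value = canvas.toDataURL('image/png');
});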
Good luck! :D
Using HTML/JavaScript you can only select files using the file-upload HTML component (I think Flash/Silverlight wrap this to make things easier, but it's still sandboxed).
You can, however, use Java applets (or whatever they are called these days), native ActiveX controls or .NET controls to provide additional functionality (this has security implications and requires VMs/runtime frameworks, etc.).
Adobe AIR or other client-side technology might work, but it looks like you want to do this in JavaScript. In that case, uploading the file to the server and manipulating it from there is the best bet.
[EDIT]
Since 2010, when this question was answered, technology has moved on: HTML now has the ability to create and manipulate images within the browser. See the newer answers or these examples:
http://davidwalsh.name/resize-image-canvas
http://deepliquid.com/content/Jcrop.html