I have a lightbox gallery with a download button on each image. The download works fine in Chrome, but I have problems with IE. That is why I am doing it with a canvas, like this:
if (browser() === 'IE') {
    var canvas = document.getElementById("canvas"),
        ctx = canvas.getContext("2d");
    var img = $('.lb-image')[0];
    var imgWidth = $(img).prop('naturalWidth'),
        imgHeight = $(img).prop('naturalHeight');
    canvas.width = imgWidth;
    canvas.height = imgHeight;
    ctx.drawImage(img, 0, 0, imgWidth, imgHeight, 0, 0, canvas.width, canvas.height);
    window.navigator.msSaveBlob(canvas.msToBlob(), imgName);
}
Yesterday I saw that when you download a picture through IE, the downloaded image is larger than the uploaded one, and larger than what you get when downloading via Chrome. I inspected the image info and saw that the only difference is the bit depth: it is larger on the image downloaded from IE's canvas.
How can I manually set the bit depth, or is there a better approach?
Drawing an Image on a canvas will convert this Image from whatever format to raw 24-bit RGB + 8-bit alpha data (32 bits per pixel). Currently there is no official option to set this yourself.
All you can do is choose which compression (JPEG, PNG, WebP) will be used when exporting the canvas, but this compression is applied to that raw 32-bit RGBA data anyway. So whatever you do, drawing on a canvas is lossy, and the result will have nothing to do with the original image file anymore.
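For illustration only, a minimal sketch of the only knob you do get at export time, assuming canvas already holds the drawn image (format support depends on the browser):
// You can only pick the container format and, for lossy formats, a quality hint;
// the input is always the canvas's raw RGBA bitmap.
var asPng  = canvas.toDataURL('image/png');        // loss-less re-encode of the RGBA data
var asJpeg = canvas.toDataURL('image/jpeg', 0.9);  // lossy, quality hint 0.9
var asWebp = canvas.toDataURL('image/webp', 0.9);  // only where the browser supports WebP export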
But anyway, your workaround is not the correct one.
Your original problem is that you want to enable the <a href="someURL" download="myFile.png"> in IE.
Instead of drawing the image on a canvas, request it through ajax as a Blob. Then you'll be able to use navigator.msSaveBlob easily:
if (browser() === 'IE') {
    var xhr = new XMLHttpRequest();
    xhr.open('get', img.src);
    xhr.responseType = 'blob'; // ask for the raw file, untouched
    xhr.onload = function() {
        // hand the original file straight to IE's save dialog
        window.navigator.msSaveBlob(xhr.response, imgName);
    };
    xhr.send();
}
With this code, what you will download through msSaveBlob is the real file stored on the server, just like <a download>.
Important note
This will obviously work only with same-origin resources, just like <a download> and even like the canvas workaround.
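To tie it together, here is a rough sketch of how the whole download handler could branch (browser(), img, and imgName are the names from your own code and are assumed to exist):
function downloadImage(img, imgName) {
    if (browser() === 'IE') {
        // IE 10/11: fetch the original file and hand it to msSaveBlob
        var xhr = new XMLHttpRequest();
        xhr.open('get', img.src);
        xhr.responseType = 'blob';
        xhr.onload = function() {
            window.navigator.msSaveBlob(xhr.response, imgName);
        };
        xhr.send();
    } else {
        // Browsers that support the download attribute: a plain anchor does the job
        var a = document.createElement('a');
        a.href = img.src;
        a.download = imgName;
        document.body.appendChild(a);
        a.click();
        document.body.removeChild(a);
    }
}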
I'm toying around with making a super simple HTML Canvas crop tool. The first thing I tested was to see if the output image would be perceptually identical to the input image.
Using this image as a source, canvas fails to maintain the smooth gradients as you can see in the image comparison I posted here (still visible despite the imgur compression). You can also replicate it in any online photo editor such as https://pixlr.com.
Is there some way to fix this?
Code snippet I am using:
const loadImageToCanvas = (file) => { // file is from input.files
    const img = new Image();
    img.onload = () => {
        const { width, height } = img;
        // canvas and ctx are defined elsewhere on the page
        canvas.width = width;
        canvas.height = height;
        ctx.clearRect(0, 0, width, height);
        ctx.drawImage(img, 0, 0);
    };
    img.src = URL.createObjectURL(file);
};
Two words: gamma correction. Your PNG file has a gAMA chunk of 1.0000. Web browsers are (correctly) using this information to adjust the displayed pixels for an output device having the standard sRGB gamma of 2.2. This behaviour is the same for both <canvas> and <img> elements.[1]
I don't know what viewer or conversion tool you are using to produce your imgur image, but it is either stripping or ignoring the gamma chunk.
If your image is in fact encoded with a gamma of 2.2 (and thus the gamma chunk is erroneous), you can remove the chunk with:
pngcrush -rem gAMA 1024.png 1024.nogamma.png
[1] The spec mandates this consistency. Are you really seeing different behaviour between your (correct, although using createObjectURL is unnecessary and a bad idea) code and an <img> tag?
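Not part of the original answer, but if you want to check the gAMA chunk yourself in the browser, a rough sketch along these lines should work (file is a File/Blob containing the PNG; it walks the PNG chunk layout of 4-byte length, 4-byte type, data, 4-byte CRC):
function reportGamma(file) {
    file.arrayBuffer().then(function(buf) {
        var view = new DataView(buf);
        var offset = 8; // skip the 8-byte PNG signature
        while (offset + 8 <= view.byteLength) {
            var length = view.getUint32(offset); // big-endian chunk data length
            var type = String.fromCharCode(
                view.getUint8(offset + 4), view.getUint8(offset + 5),
                view.getUint8(offset + 6), view.getUint8(offset + 7));
            if (type === 'gAMA') {
                // the chunk stores gamma * 100000 as a 4-byte unsigned integer
                console.log('gAMA =', view.getUint32(offset + 8) / 100000);
                return;
            }
            offset += 12 + length; // length + type + data + CRC
        }
        console.log('no gAMA chunk found');
    });
}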
So, I have a file input which serves to upload pictures. This is a simple example:
function handleImage(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var img = new Image();
        img.onload = function() {
            console.log(img);
        };
        img.src = event.target.result;
    };
    reader.readAsDataURL(e.target.files[0]);
}
<input type="file" onchange="handleImage(event)"><br>
As you can see, I log an Image() to the console. I want to convert this Image into a JPG file.
I understood how to get the pixels of the picture, but it's completely crazy to send them to a server: it's too large!
I also tried to access the JPG file stored on the computer, but I did not get anywhere.
The only way I found is to send it with a form like this:
<form action="anything.php" method="post" enctype="multipart/form-data">
<input type="file" name="fileToUpload" id="fileToUpload">
</form>
And in PHP:
$_FILES["fileToUpload"]["tmp_name"]
Why not with JS?
My final goal is to send the jpg file with AJAX.
Tell me if you have some questions.
The simplest way is to use a canvas element and then invoke a download action allowing the user to select where to save the image.
You mention that the image is large, but not how large. Be aware that with canvas you will also run into restrictions when the image source starts to approach around 8k pixels per side.
A simplified example (IE will require a polyfill for toBlob()):
Load image source via input
Use File blob directly as image source via URL.createObjectURL()
When loaded, create a temporary canvas, set canvas size = image size and draw in image
Use toBlob() (more efficient on memory and performance, and it requires no transcoding to/from Base64) to obtain a Blob.
We'll convert the Blob to a File (a Blob subclass that references the same memory) so we can also give it a filename as well as (important!) a binary mime-type.
Since the mime-type for the final step is binary the browser will invoke a Save as dialog.
document.querySelector("input").onchange = function() {
    var img = new Image;
    img.onload = convert;
    img.src = URL.createObjectURL(this.files[0]);
};

function convert() {
    URL.revokeObjectURL(this.src);              // free up memory
    var c = document.createElement("canvas"),   // create a temp. canvas
        ctx = c.getContext("2d");
    c.width = this.width;                       // set size = image, draw
    c.height = this.height;
    ctx.drawImage(this, 0, 0);
    // convert to File object, NOTE: we're using binary mime-type for the final Blob/File
    c.toBlob(function(blob) {
        var file = new File([blob], "MyJPEG.jpg", {type: "application/octet-stream"});
        window.location = URL.createObjectURL(file);
    }, "image/jpeg", 0.75);                     // mime=JPEG, quality=0.75
}
// NOTE: toBlob() is not supported in IE, use a polyfill for IE.
<label>Select image to convert: <input type=file></label>
Update: If you are just after a string (base-64 encoded) version of the newly created JPEG, simply use toDataURL() instead of toBlob():
document.querySelector("input").onchange = function() {
    var img = new Image;
    img.onload = convert;
    img.src = URL.createObjectURL(this.files[0]);
};

function convert() {
    URL.revokeObjectURL(this.src);              // free up memory
    var c = document.createElement("canvas"),   // create a temp. canvas
        ctx = c.getContext("2d");
    c.width = this.width;                       // set size = image, draw
    c.height = this.height;
    ctx.drawImage(this, 0, 0);
    // encode as a base-64 JPEG data-uri string
    var jpeg = c.toDataURL("image/jpeg", 0.75); // mime=JPEG, quality=0.75
    console.log(jpeg.length);
}
<label>Select image to convert: <input type=file></label>
JavaScript on the client side cannot save files.
You have multiple options:
Render the image on a <canvas> element. This way it can be saved with right click -> save image
Insert the image as an <img> element. This way it can be saved with right click -> save image
Send the image data as base64 string to the server. Do the processing there
Use a server side language like PHP or Node.js to save the file
Long story short, you have to use some server-side logic to save the file on disk.
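For the AJAX part (the stated end goal), a rough sketch of posting the canvas output as binary instead of base-64, using FormData (upload.php is a made-up endpoint, and c is a canvas that already has the image drawn on it, as in the snippets above):
c.toBlob(function(blob) {
    var form = new FormData();
    form.append("fileToUpload", blob, "photo.jpg"); // same field name PHP reads in $_FILES
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "upload.php"); // hypothetical endpoint
    xhr.onload = function() { console.log("upload finished, status " + xhr.status); };
    xhr.send(form);
}, "image/jpeg", 0.75);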
I'm trying to produce the same base64 data for an image file in both JavaScript and in Ruby. Unfortunately both are outputting two very different values.
In Ruby I do this:
Base64.encode64(File.binread('test.png'));
And then in JavaScript:
var image = new Image();
image.src = 'http://localhost:8000/test.png';

$(image).load(function() {
    var canvas, context, imageData;
    canvas = document.createElement('canvas');
    context = canvas.getContext('2d');
    canvas.width = this.width;
    canvas.height = this.height;
    context.drawImage(this, 0, 0);
    imageData = canvas.toDataURL('image/png').replace(/data:image\/[a-z]+;base64,/, '');
    console.log(imageData);
});
Any idea why these outputs are different?
When you load the image in Ruby, the binary file is encoded directly to base-64 without any modification.
When you load an image in the browser, some processing is applied to it before you can use it with canvas:
ICC profile will be applied (if the image file contains that)
Gamma correction (where supported)
By the time you draw the image to the canvas, the bitmap values have already been changed and won't necessarily be identical to the bitmap that was encoded before it was loaded as an image (if the file has an alpha channel, this may also affect the color values when drawn to canvas; canvas is a little peculiar about this).
As the color values are changed, the resulting string from canvas will naturally also be different, before you even get to the stage of re-encoding the bitmap. (As PNG is loss-less, the encoding/compressing should be fairly identical, but factors depending on the browser implementation may influence that as well. To test, save out a black unprocessed canvas as PNG and compare it with a similar image from your application: all values should be 0, including alpha, and at the same size of course.)
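A rough sketch of that kind of sanity check, run from the browser side (blank-reference.png is an assumed name for an all-transparent, same-size PNG produced by your other tool):
// Draw the reference blank PNG onto a same-size canvas and confirm every
// RGBA value comes back as 0, i.e. loading did not modify the pixels.
var img = new Image();
img.onload = function() {
    var c = document.createElement('canvas');
    c.width = img.width;
    c.height = img.height;
    var ctx = c.getContext('2d');
    ctx.drawImage(img, 0, 0);
    var data = ctx.getImageData(0, 0, c.width, c.height).data;
    var allZero = true;
    for (var i = 0; i < data.length; i++) {
        if (data[i] !== 0) { allZero = false; break; }
    }
    console.log(allZero ? 'all values 0 incl. alpha' : 'pixel values were modified on load');
};
img.src = 'blank-reference.png'; // assumed: an all-transparent PNG from your other tool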
The only way to avoid this is to deal with the binary data directly. This is of course a bit overkill (in general at least) and a relatively slow process in a browser.
A possible solution that works in some cases is to remove any ICC profile from the image file. To save an image from Photoshop without an ICC profile, choose "Save for Web.." in the File menu.
The browser is re-encoding the image as you save the canvas.
It does not generate an identical encoding to the file you rendered.
So I actually ended up solving this...
Fortunately I am using imgcache.js to cache images in the local filesystem using the FileSystem API. My solution is to use this API (and imgcache.js makes it easy) to get the base64 data from the actual cached copy of the file. The code looks like this:
var imageUrl = 'http://localhost:8000/test.png';

ImgCache.init(function() {
    ImgCache.cacheFile(imageUrl, function() {
        ImgCache.getCachedFile(imageUrl, function(url, fileEntry) {
            fileEntry.file(function(file) {
                var reader = new FileReader();
                reader.onloadend = function(e) {
                    console.log($.md5(this.result.replace(/data:image\/[a-z]+;base64,/, '')));
                };
                reader.readAsDataURL(file);
            });
        });
    });
});
Also, and very importantly, I had to remove line breaks from the base64 in Ruby:
Base64.encode64(File.binread('test.png')).gsub("\n", '');
I wanted to know if anyone out there knows how
canvas.toDataURL("image/png");
works? I want to understand it better because at times it seems to really slow my computer down.
Is there a way to optimize the base64 image before, during, or after, to get better performance?
function base64(url) {
    var img = new Image(),
        canvas = document.createElement("canvas"),
        ctx = canvas.getContext("2d");
    img.crossOrigin = "Anonymous";
    img.onload = function () {
        canvas.height = img.height;
        canvas.width = img.width;
        ctx.drawImage(img, 0, 0);
        var dataURL = canvas.toDataURL('image/png');
        preload(dataURL);
        canvas = null;
    };
    img.src = url;
}
Basically this is my function but I wanted to see if there was a way to make this process perform better or if there was an alternative to canvas.toDataURL('image/png');
thanks
toDataURL() does the following when called (synchronously):
Creates a file header based on the file type requested or supported (defaults to PNG)
Compresses the bitmap data based on file format
Encodes the resulting binary file to Base-64 string
Prepends the data-uri header and returns the result
When setting a data-uri as source (asynchronously):
String is verified
Base-64 part is separated and decoded to binary format
Binary file verified then parsed and uncompressed
Resulting bitmap set to Image element and proper callbacks invoked
These are time-consuming steps, and as they are internal we cannot tap into them for any reason. They are also already pretty optimized for the context they work in, so there is little we can do to optimize them further.
You can experiment with different compression schemes by using JPEG versus PNG. They are using very different compression techniques and depending on the image size and content one can be better than the other in various situations.
My 2 cents..
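If you want to run the PNG-versus-JPEG experiment suggested above, a quick sketch for timing both exports on the same canvas (canvas is assumed to already hold your image; results vary a lot with size and content):
console.time('toDataURL png');
var png = canvas.toDataURL('image/png');
console.timeEnd('toDataURL png');

console.time('toDataURL jpeg');
var jpeg = canvas.toDataURL('image/jpeg', 0.75);
console.timeEnd('toDataURL jpeg');

console.log('png chars:', png.length, 'jpeg chars:', jpeg.length);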
The high-performance alternative is canvas.toBlob. It is extremely fast, asynchronous, and produces a Blob which can also be swapped to disk, and is, subjectively speaking, simply far more useful.
Unfortunately it is implemented in Firefox, but not in Chrome.
Having carefully benchmarked this, there is no way around it, because canvas.toDataURL itself is the bottleneck by orders of magnitude.
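A minimal sketch of using toBlob where it exists, with a fallback to the slower data-uri path (not from the original answer):
if (canvas.toBlob) {
    // asynchronous export: the page is not blocked by building a base-64 string
    canvas.toBlob(function(blob) {
        console.log('got a blob of', blob.size, 'bytes');
        // e.g. URL.createObjectURL(blob), or upload it via FormData
    }, 'image/png');
} else {
    // fallback where toBlob is missing: the synchronous data-uri path
    var dataURL = canvas.toDataURL('image/png');
    console.log('got a data-uri of', dataURL.length, 'characters');
}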
In JavaScript you have the ArrayBuffer (byte array) type and some typed-array views on it, as described here:
https://developer.mozilla.org/en-US/docs/JavaScript/Typed_arrays
Is it possible to store image data in such byte arrays, and if yes, how can I display such an image? PNG or JPG?
Yes, you can store an image using the typed arrays.
I'm not sure what you want, actually.
If you want to create an HTMLImageElement from a ByteArray, you can do something similar as cited in here and here.
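Not from the original answer, but the ByteArray-to-image direction is commonly done by wrapping the bytes in a Blob and using an object URL; a rough sketch (rawImageBytes is assumed to already hold a complete PNG or JPEG file):
var bytes = new Uint8Array(rawImageBytes);            // rawImageBytes: your existing byte data (assumption)
var blob = new Blob([bytes], { type: 'image/png' });  // use 'image/jpeg' for JPEG bytes
var img = new Image();
img.onload = function() {
    URL.revokeObjectURL(img.src); // free the temporary URL once the image has loaded
};
img.src = URL.createObjectURL(blob);
document.body.appendChild(img);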
If you want to get the bytes from an Image, that would be trickier. You can draw the image element to an HTML canvas, and then get the data URI back using toDataURL.
I just tried to get the data using canvas and it worked.
var myCanvas = document.getElementById('my_canvas_id');
var ctx = myCanvas.getContext('2d');

var img = new Image;
img.onload = function() {
    myCanvas.width = img.width;
    myCanvas.height = img.height;
    ctx.drawImage(img, 0, 0); // Or at whatever offset you like
    alert(myCanvas.toDataURL());
};
img.src = "logo4w.png";
Note, though, that toDataURL() does not allow you to perform this operation if the image you draw on the canvas comes from outside the website's domain.
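As an aside (not from the original answer): if the remote server sends CORS headers, you can avoid tainting the canvas by requesting the image with the crossOrigin attribute (reusing myCanvas/ctx from the snippet above; the URL is hypothetical):
var remoteImg = new Image();
remoteImg.crossOrigin = "anonymous"; // only helps if the server responds with Access-Control-Allow-Origin
remoteImg.onload = function() {
    myCanvas.width = remoteImg.width;
    myCanvas.height = remoteImg.height;
    ctx.drawImage(remoteImg, 0, 0);
    alert(myCanvas.toDataURL()); // no SecurityError as long as the CORS request succeeded
};
remoteImg.src = "https://example.com/remote-image.png"; // hypothetical cross-origin URL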