AngularJS compress image before upload - JavaScript

I'm building a website for mobile devices that uses angular-file-upload.min.js for uploading images from a mobile device's image library.
HTML code:
<div>
    <div class="rating-camera-icon">
        <input type="file" accept="image/*" name="file"
               ng-file-select="onFileSelect($files)">
    </div>
    <img ng-show="fileName" ng-src="server/{{fileName}}" width="40"
         style="margin-left:10px">
</div>
JavaScript code:
$scope.onFileSelect = function($files) {
    for (var i = 0; i < $files.length; i++) {
        var file = $files[i];
        if (!file.type.match(/image.*/)) {
            // this file is not an image, skip it
            continue;
        }
        $scope.upload = $upload.upload({
            url: BASE_URL + 'upload.php',
            data: {myObj: $scope.myModelObj},
            file: file
        }).progress(function(evt) {
            // console.log('percent: ' + parseInt(100.0 * evt.loaded / evt.total));
            // $scope.fileProgress = evt.loaded / evt.total * 100.0;
        }).success(function(data, status, headers, config) {
            // file was uploaded successfully
            $scope.fileName = data;
        });
    }
};
The upload is very slow on mobile devices. How can I compress the file before uploading?

Stringifying the image into a base-64 text format is all fine and well, but it takes a small amount of time and certainly does not compress it. In fact, it will likely be noticeably larger than the raw image. Unfortunately, your browser will not gzip uploads (it can of course handle gzipped downloads). You could try to gzip the text yourself with a pure JS solution; looking on GitHub you can find such things, e.g. https://github.com/beatgammit/gzip-js. However, that takes time as well, and there is no guarantee that the compressed text version of the image is any smaller than the raw JPEG you attach.
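To put a rough number on that overhead: base-64 encodes every 3 bytes as 4 ASCII characters, so the text form is about a third larger than the binary. A quick sketch (the 300 KB figure is just an illustrative assumption):
// base-64 turns every 3 input bytes into 4 output characters (~33% growth)
function base64Length(byteCount) {
    return Math.ceil(byteCount / 3) * 4;
}
console.log(base64Length(300 * 1024)); // a ~300 KB image becomes ~400 KB of text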
A native mobile app might use some native-code JPEG or PNG optimization before sending (basically resampling the image) if appropriate, but doing this in JavaScript seems potentially problematic at this point in time. Given Atwood's law (that anything that can be written in JavaScript eventually will be), it certainly could be done, but as of mid-2014 it isn't.

You could draw the image onto a canvas, convert it to a data-URI (base-64) string, and then upload that string.
I built something like this in a POC. There's a bug in iOS when very large images (such as the ones you can take with the camera) are drawn to a canvas, but overall it works nicely. Something like:
file = files[0];
try {
    var URL = window.URL || window.webkitURL,
        imgURL = URL.createObjectURL(file);
    showPicture.src = imgURL;
    imgBlobToStore = imgURL;
    if (AppData.supports_html5_storage()) {
        var canvas = document.getElementById('storingCanvas'),
            ctx = canvas.getContext('2d'),
            img = new Image(),
            convertedFile;
        img.onload = function () {
            canvas.width = img.width;
            canvas.height = img.height;
            ctx.drawImage(img, 0, 0, img.width, img.height);
            convertedFile = canvas.toDataURL("image/jpeg"); // or "image/png"
            // replace with Angular storage here
            localStorage.setItem($('.pic').attr('id'), convertedFile);
        };
        img.src = imgBlobToStore; // assign src after onload so the handler always fires
    }
} catch (e) {
    // handle errors (e.g. createObjectURL not supported)
}
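If the goal is to make the upload smaller rather than just re-encode it, you can also downscale on the canvas and lower the JPEG quality before converting back to a Blob. The sketch below is only an outline under assumed values (the compressImage name, the 1024 px cap and 0.7 quality are mine, not from the original code); the resulting Blob could then be handed to $upload.upload in place of the original file:
// Hypothetical helper: scale an image File down to fit maxSize and re-encode as JPEG,
// then hand the compressed Blob to the callback.
function compressImage(file, maxSize, quality, done) {
    var URL = window.URL || window.webkitURL,
        img = new Image(),
        url = URL.createObjectURL(file);
    img.onload = function () {
        var scale = Math.min(1, maxSize / Math.max(img.width, img.height)),
            canvas = document.createElement('canvas'),
            ctx = canvas.getContext('2d');
        canvas.width = Math.round(img.width * scale);
        canvas.height = Math.round(img.height * scale);
        ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
        URL.revokeObjectURL(url);
        canvas.toBlob(done, 'image/jpeg', quality); // toBlob needs a polyfill in older browsers
    };
    img.src = url;
}
// usage: compressImage(file, 1024, 0.7, function (blob) { /* upload blob instead of file */ });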

There are several libraries that do this for you on the client side.
https://github.com/oukan/angular-image-compress
https://github.com/sammychl/ng-image-compress
http://angularscript.com/client-side-image-compress-directive-with-angular/

As an alternative to a programmatic solution: if your image is being created by the device camera for upload, why not simply change the resolution of the camera? The smallest resolution may be 10x smaller than the largest, and this may be suitable for many situations.

Related

JS-Convert an Image object to a jpeg file

So, I have a file input which serves to upload pictures. This is a simple example:
function handleImage(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var img = new Image();
        img.onload = function() {
            console.log(img);
        };
        img.src = event.target.result;
    };
    reader.readAsDataURL(e.target.files[0]);
}
<input type="file" onchange="handleImage(event)"><br>
As you can see, I display an Image() in the console. I want to convert this Image into a .jpg file.
I understand how to get the pixels of the picture, but sending them to a server as raw pixel data is completely crazy: it's far too large!
I also tried to access the .jpg file stored on the computer, but I couldn't get anywhere with that.
The only way I found is to send it with a form like this:
<form action="anything.php" method="post" enctype="multipart/form-data">
<input type="file" name="fileToUpload" id="fileToUpload">
</form>
And in PHP:
$_FILES["fileToUpload"]["tmp_name"]
Why not with JS?
My final goal is to send the jpg file with AJAX.
Tell me if you have any questions.
The simplest way is to use a canvas element and then invoke a download action allowing the user to select where to save the image.
You mention that the image is large, but not by how much - be aware that with canvas you will also run into restrictions when the image source starts to touch around the 8k area in pixel size.
A simplified example (IE requires a polyfill for toBlob()):
Load the image source via the input
Use the File blob directly as the image source via URL.createObjectURL()
When loaded, create a temporary canvas, set canvas size = image size, and draw in the image
Use toBlob() (more efficient on memory and performance, and requires no transcoding to/from Base64) to obtain a Blob
We'll convert the Blob to a File (essentially a Blob with a name, referencing the same memory) so we can give it a filename as well as (important!) a binary mime-type
Since the mime-type in the final step is a generic binary type, the browser will invoke a Save As dialog
document.querySelector("input").onchange = function() {
    var img = new Image();
    img.onload = convert;
    img.src = URL.createObjectURL(this.files[0]);
};

function convert() {
    URL.revokeObjectURL(this.src);              // free up memory
    var c = document.createElement("canvas"),   // create a temp. canvas
        ctx = c.getContext("2d");
    c.width = this.width;                       // set size = image, draw
    c.height = this.height;
    ctx.drawImage(this, 0, 0);
    // convert to a File object; NOTE: we're using a binary mime-type for the final Blob/File
    c.toBlob(function(blob) {
        var file = new File([blob], "MyJPEG.jpg", {type: "application/octet-stream"});
        window.location = URL.createObjectURL(file);
    }, "image/jpeg", 0.75);                     // mime=JPEG, quality=0.75
}
// NOTE: toBlob() is not supported in IE, use a polyfill for IE.
<label>Select image to convert: <input type=file></label>
Update: If you are just after a string (base-64 encoded) version of the newly created JPEG, simply use toDataURL() instead of toBlob():
document.querySelector("input").onchange = function() {
    var img = new Image();
    img.onload = convert;
    img.src = URL.createObjectURL(this.files[0]);
};

function convert() {
    URL.revokeObjectURL(this.src);              // free up memory
    var c = document.createElement("canvas"),   // create a temp. canvas
        ctx = c.getContext("2d");
    c.width = this.width;                       // set size = image, draw
    c.height = this.height;
    ctx.drawImage(this, 0, 0);
    // convert to a base-64 encoded data-URI string
    var jpeg = c.toDataURL("image/jpeg", 0.75); // mime=JPEG, quality=0.75
    console.log(jpeg.length);
}
<label>Select image to convert: <input type=file></label>
JavaScript on the client side cannot save files by itself.
You have multiple options:
Render the image onto a <canvas> element. This way it can be saved with right click -> Save image
Insert the image as an <img> element. This way it can be saved with right click -> Save image
Send the image data as a base-64 string to the server and do the processing there
Use a server-side language like PHP or Node.js to save the file
Long story short, you have to use some server-side logic to save the file to disk.
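If the final goal is to send the JPEG with AJAX, one option (a sketch, assuming a canvas c that already holds the image and a hypothetical /upload endpoint) is to post the canvas content as a real file using FormData; PHP then reads it from $_FILES exactly like a normal form upload:
// Sketch: post the canvas as multipart form data (the endpoint name is assumed).
c.toBlob(function (blob) {
    var formData = new FormData();
    formData.append("fileToUpload", blob, "image.jpg"); // same field name the PHP form used
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/upload");
    xhr.onload = function () { console.log("upload finished:", xhr.status); };
    xhr.send(formData); // arrives server-side in $_FILES["fileToUpload"]
}, "image/jpeg", 0.75);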

Javascript Client Side JPEG image compression

I basically want to compress images on upload (client-side) and then attach the result as the src of an element in my HTML (by id). The code of the function is below:
function readURL(input) {
    if (input.files && input.files[0]) {
        var reader = new FileReader();
        reader.onload = function (e) {
            console.log("e.target.result", e.target.result);
            var imageEle = new Image();
            imageEle.src = e.target.result;
            imageEle.onload = function() {
                var cvs = document.createElement('canvas');
                cvs.width = imageEle.naturalWidth;
                cvs.height = imageEle.naturalHeight;
                var ctx = cvs.getContext("2d");
                ctx.drawImage(imageEle, 0, 0);
                var newImageData = cvs.toDataURL('image/jpeg', 0.5);
                $('#uploadedImage').attr('src', newImageData);
            };
        };
        reader.readAsDataURL(input.files[0]);
    }
}
This should work fine, but what happens is that the uploaded image, after processing, appears as a black box. I have read a lot of similar questions, but none relating to compression that fix this issue. It works fine with files (JPEG images) less than 500 KB, but for real-world files of 1 MB plus it gives a black box as the data URI. It would be awesome if somebody could help me with this.
Thanks,
Vaibhav
There's a size limit on the canvas that toDataURL can handle; see "canvas.toDataURL() for large canvas" for a similar question and possible answers with server-side code.
If you're restricted to this client-side code, you'll have to reduce the size of the image before using toDataURL.
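One way to do that, sketched here under an assumed 2048 px cap (tune the number to the device), is to draw the image at a reduced size before calling toDataURL:
// Draw the image scaled down so the canvas stays within safe limits, then encode.
imageEle.onload = function () {
    var maxDim = 2048, // assumed cap, adjust as needed
        scale = Math.min(1, maxDim / Math.max(imageEle.naturalWidth, imageEle.naturalHeight)),
        cvs = document.createElement('canvas');
    cvs.width = Math.round(imageEle.naturalWidth * scale);
    cvs.height = Math.round(imageEle.naturalHeight * scale);
    cvs.getContext('2d').drawImage(imageEle, 0, 0, cvs.width, cvs.height);
    $('#uploadedImage').attr('src', cvs.toDataURL('image/jpeg', 0.5));
};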

Why does canvas.toDataURL() not produce the same base64 as in Ruby for an image?

I'm trying to produce the same base64 data for an image file in both JavaScript and in Ruby. Unfortunately both are outputting two very different values.
In Ruby I do this:
Base64.encode64(File.binread('test.png'));
And then in JavaScript:
var image = new Image();
image.src = 'http://localhost:8000/test.png';
$(image).load(function() {
    var canvas, context, imageData;
    canvas = document.createElement('canvas');
    context = canvas.getContext('2d');
    canvas.width = this.width;
    canvas.height = this.height;
    context.drawImage(this, 0, 0);
    imageData = canvas.toDataURL('image/png').replace(/data:image\/[a-z]+;base64,/, '');
    console.log(imageData);
});
Any idea why these outputs are different?
When you read the image file in Ruby, the binary file is encoded directly to base-64 without any modifications.
When you load an image in the browser, it will apply some processing to the image before you can use it with canvas:
The ICC profile will be applied (if the image file contains one)
Gamma correction (where supported)
By the time you draw the image to canvas, the bitmap values have already been changed and won't necessarily be identical to the bitmap that was encoded before it was loaded as an image (if the file has an alpha channel, this may also affect the color values when drawn to canvas - canvas is a little peculiar about this).
As the color values are changed, the resulting string from canvas will naturally also be different, before you even get to the stage of re-encoding the bitmap. (As PNG is loss-less, the encoding/compression should be fairly identical, but factors depending on the browser implementation may influence that as well. To test, save out a black, unprocessed canvas as PNG and compare it with a similar image from your application - all values should be 0, including alpha, and at the same size of course.)
The only way to avoid this is to deal with the binary data directly. This is of course a bit overkill (in general at least) and a relatively slow process in a browser.
A possible solution that works in some cases is to remove any ICC profile from the image file. To save an image from Photoshop without an ICC profile, choose "Save for Web..." in the File menu.
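If the goal is simply to get the same base-64 string as Ruby, a sketch that avoids canvas entirely is to read the original file bytes with a FileReader. This assumes you have the same test.png available through an <input type="file"> (fileInput below stands for that element) rather than only as a URL:
// Read the untouched file bytes and base-64 encode them; no decode/re-encode is involved.
fileInput.onchange = function () {
    var reader = new FileReader();
    reader.onload = function () {
        var base64 = reader.result.split(',')[1]; // strip the "data:image/png;base64," prefix
        console.log(base64); // should match Ruby's Base64.encode64 output (minus line breaks)
    };
    reader.readAsDataURL(fileInput.files[0]);
};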
The browser is re-encoding the image as you save the canvas.
It does not generate an identical encoding to the file you rendered.
So I actually ended up solving this...
Fortunately I am using imgcache.js to cache images in the local filesystem using the FileSystem API. My solution is to use this API (and imgcache.js makes it easy) to get the base64 data from the actual cached copy of the file. The code looks like this:
var imageUrl = 'http://localhost:8000/test.png';
ImgCache.init(function() {
    ImgCache.cacheFile(imageUrl, function() {
        ImgCache.getCachedFile(imageUrl, function(url, fileEntry) {
            fileEntry.file(function(file) {
                var reader = new FileReader();
                reader.onloadend = function(e) {
                    console.log($.md5(this.result.replace(/data:image\/[a-z]+;base64,/, '')));
                };
                reader.readAsDataURL(file);
            });
        });
    });
});
Also, and very importantly, I had to remove line breaks from the base64 in Ruby:
Base64.encode64(File.binread('test.png')).gsub("\n", '');

canvas.toDataURL("image/png") - how does it work and how to optimize it

I wanted to know if there was anyone out there that knows how
canvas.toDataURL("image/png");
works? I want to understand better because at times it seems to really slow my computer down.
Is there a way to optimize the base64 image before, during, or after to get better performance?
function base64(url) {
    var img = new Image(),
        canvas = document.createElement("canvas"),
        ctx = canvas.getContext("2d");
    img.crossOrigin = "Anonymous";
    img.onload = function () {
        canvas.height = img.height;
        canvas.width = img.width;
        ctx.drawImage(img, 0, 0);
        var dataURL = canvas.toDataURL('image/png');
        preload(dataURL); // preload() is defined elsewhere in my app
        canvas = null;
    };
    img.src = url;
}
Basically this is my function but I wanted to see if there was a way to make this process perform better or if there was an alternative to canvas.toDataURL('image/png');
thanks
toDataURL() does the following when called (synchronously):
Creates a file header based on the file type requested or supported (defaults to PNG)
Compresses the bitmap data based on file format
Encodes the resulting binary file to Base-64 string
Prepends the data-uri header and returns the result
When setting a data-uri as source (asynchronously):
String is verified
Base-64 part is separated and decoded to binary format
Binary file verified then parsed and uncompressed
Resulting bitmap set to Image element and proper callbacks invoked
These are time-consuming steps, and since they are internal we cannot tap into them. They are already fairly well optimized for the context they work in, so there is little we can do to speed them up ourselves.
You can experiment with different compression schemes by using JPEG versus PNG. They use very different compression techniques, and depending on the image size and content, one can be better than the other in various situations.
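A quick way to compare the two, assuming a canvas that already has the image drawn on it, is simply to measure the resulting data-URI lengths:
// Compare output sizes for the same canvas content (the quality value is an arbitrary example).
var pngSize  = canvas.toDataURL('image/png').length;
var jpegSize = canvas.toDataURL('image/jpeg', 0.8).length;
console.log('PNG:', pngSize, 'JPEG (q=0.8):', jpegSize);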
My 2 cents..
The high-performance alternative is canvas.toBlob. It is extremely fast, asynchronous, and produces a Blob which can also be swapped to disk, and is, subjectively speaking, simply far more useful.
Unfortunately it is implemented in Firefox, but not in Chrome.
Having carefully benchmarked this, there is no way around it: canvas.toDataURL itself is the bottleneck by orders of magnitude.
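For browsers that do support it, usage is straightforward; a minimal sketch, again assuming the image is already drawn on the canvas:
// toBlob() encodes asynchronously and hands back a Blob instead of a base-64 string.
canvas.toBlob(function (blob) {
    console.log('Encoded', blob.size, 'bytes without blocking the UI thread');
    // the Blob can be uploaded directly, e.g. appended to a FormData object
}, 'image/png');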

FineUploader: Thumbnails on the fly clientside

I am working on a FineUploader implementation. The special request is to create thumbnails on the fly client-side and then upload those along with the original image upload.
I have an implementation that works in Firefox but does not seem to work on iOS. It looks like so:
var uploader = new qq.FineUploaderBasic({
    button: document.getElementById(buttonID),
    request: {
        endpoint: '/up/load/a/' + $('section#ajax-viewport').data('albumid')
    },
    callbacks: {
        onSubmit: function(id, fileName) {
            // getFile obtains the file being uploaded
            file = this.getFile(id);
            // create a thumbnail & upload it:
            ThumbDown(file, id, 200, fileName);
        }
    }
});
This code calls a function:
function ThumbDown(file, id, dimension, fileName) {
    var reader = new FileReader();
    reader.onload = function(e) {
        var img = document.createElement("img");
        img.onload = function (ev) {
            var thumbnailDimensions; // object holding width & height of thumbnail
            var c = document.getElementById("canvas-for-thumbnails"); // must be a <canvas> element
            // set thumbnail dimensions of canvas:
            thumbnailDimensions = calcThumbnailDimension(img.width, img.height, dimension);
            c.width = thumbnailDimensions.width;
            c.height = thumbnailDimensions.height;
            var ctx = c.getContext("2d");
            ctx.drawImage(img, 0, 0, c.width, c.height);
            uploadThumbnail(c.toDataURL('image/jpeg'), // a base64-encoded representation of the image
                            id,
                            fileName); // we need the filename to combine with the mother-image on the server
        };
        img.src = e.target.result;
    };
    reader.readAsDataURL(file);
} // end function
Finally, the thumbnail is uploaded with a plain AJAX call:
function uploadThumbnail(base64encodedString, id, fileName) {
    $.post('/up/thumb',
        {
            img: base64encodedString,
            id: id,
            fileName: fileName
        },
        function(data) {});
}
My questions:
1) Currently I have two uploads: one for the mother image and another for the thumbnail. I would like to combine these into one FineUploader call. However, I do not see a way to do this due to the asynchronous nature of my thumbnail creation.
Am I missing something? Is it possible to reduce this to one FineUploader call?
2) This code uploads the thumbnail as a base64-encoded string. I would like to upload the thumbnail as an image (or as a blob?), perhaps by following this recipe from Jeremy Banks. Would that work with FineUploader?
3) Are there other options/methods of FineUploader that I have missed but should be using?
Any help is, as always, greatly appreciated.
So, it is already trivial to upload the original image. Fine Uploader takes care of that for you. If I understand correctly, you want to also upload a scaled version of the image (which you have already generated). I suggest you take the image you have drawn onto the canvas and convert it to a Blob. Then, you can submit this Blob directly to Fine Uploader, where it will upload it for you.
For example, change the value of uploadThumbnail to this:
function uploadThumbnail(thumbnailDataUri, id, filename) {
    var imageBlob = getImageBlob(thumbnailDataUri, "image/jpeg"),
        blobData = {
            name: filename,
            blob: imageBlob
        };
    // This will instruct Fine Uploader to upload the scaled image
    uploader.addBlobs(blobData);
}

function getImageBlob(dataUri, type) {
    var binary = atob(dataUri.split(',')[1]),
        array = [];
    for (var i = 0; i < binary.length; i++) {
        array.push(binary.charCodeAt(i));
    }
    return new Blob([new Uint8Array(array)], {type: type});
}
Note: the getImageBlob function was adapted from this Stack Overflow answer. If this works for you, be sure to upvote the answer I've linked to.
Server-side note
A Blob is pretty much a File without a name property. Your server-side code will handle the upload of a Blob pretty much the same way as it does a File or a form submit containing an <input type="file"> form field. The only noticeable difference to your server will be the filename parameter value in the Content-Disposition header of the multipart boundary containing the file. To put it another way, your server may think the image is named "blob", or perhaps some other generic name, due to the way most browsers generate multipart-encoded requests that contain Blob objects. Fine Uploader should be able to get around that by explicitly specifying a file name for the browser to include in the blob's Content-Disposition header, but this ability does not have wide browser support. Fine Uploader gets around this limitation, to some degree, by including a "qqfilename" parameter with the request containing the actual name of the Blob.
Future native support for thumbnail generation & scaling
The plan is to add native support for thumbnail previews to Fine Uploader. This is covered in feature requests #868 and #896. There are other related feature requests open, such as image rotation and validation related to images. These features and other image-related features will likely be added to Fine Uploader in the very near future. Be sure to comment on the existing feature requests or add additional requests if you'd like.
As of version 4.4 of FineUploader, this functionality has been baked into the framework, as Ray Nicholus pointed out above would eventually happen.
Here is an example of setting the upload sizes when creating a FineUploader instance:
var uploader = new qq.FineUploader({
    ...
    scaling: {
        sizes: [
            {name: "small", maxSize: 100},
            {name: "medium", maxSize: 300}
        ]
    }
});
See their page on uploading scaled images.
