JS Upload image from Canvas without "toDataURL" method - javascript

I'm trying to upload an image from a canvas to the server. The common solution is to get data from the canvas using the toDataURL method, but unfortunately some images cause Chrome to crash. I tried the image/png and image/jpeg MIME types, and I also tried reducing the quality for image/jpeg down to 0.4, but Chrome (version 45.0.2454.85 m) crashed anyway. Is there any way to extract the image from the canvas without this method, like retrieving a Blob object or something similar?

Your problem is that the images are too big, and Chrome fails with an out-of-memory crash. One solution is to reduce the quality of the image by returning it as JPEG.
var fullQuality = canvas.toDataURL("image/jpeg", 1.0);
var mediumQuality = canvas.toDataURL("image/jpeg", 0.5);
var lowQuality = canvas.toDataURL("image/jpeg", 0.1);
You can also try the image/webp format, which is supported in Chrome.
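For instance, something like this (the 0.8 quality value is an arbitrary example, not from the original answer):
// image/webp is only honored by browsers that support it, such as Chrome;
// others silently fall back to image/png
var webpQuality = canvas.toDataURL("image/webp", 0.8);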
Good luck.
EDIT
You also have the toBlob() method available, if you need to change the logic:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toBlob
var canvas = document.getElementById("canvas");

canvas.toBlob(function(blob) {
  var newImg = document.createElement("img"),
      url = URL.createObjectURL(blob);

  newImg.onload = function() {
    // no longer need to read the blob so it's revoked
    URL.revokeObjectURL(url);
  };

  newImg.src = url;
  document.body.appendChild(newImg);
});
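Since the goal in the question is uploading to the server rather than displaying the image, a minimal sketch of posting the Blob directly could look like this (the /upload endpoint and the "image" field name are placeholders, not a real API):
canvas.toBlob(function(blob) {
  var formData = new FormData();
  formData.append("image", blob, "canvas.jpg"); // field/file names are assumptions

  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/upload"); // placeholder endpoint
  xhr.onload = function() {
    console.log("upload finished with status " + xhr.status);
  };
  xhr.send(formData);
}, "image/jpeg", 0.7); // type and quality chosen arbitrarily for illustration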
EDIT 2
According to the MDN website, a polyfill is available:
if (!HTMLCanvasElement.prototype.toBlob) {
  Object.defineProperty(HTMLCanvasElement.prototype, 'toBlob', {
    value: function (callback, type, quality) {
      var binStr = atob( this.toDataURL(type, quality).split(',')[1] ),
          len = binStr.length,
          arr = new Uint8Array(len);

      for (var i = 0; i < len; i++) {
        arr[i] = binStr.charCodeAt(i);
      }

      callback( new Blob( [arr], {type: type || 'image/png'} ) );
    }
  });
}

Related

Store image content not image path in mongodb

I have seen many questions and solutions for this now. I am new to MongoDB and MEAN stack development. I want to know whether there is any way to store the image content itself, rather than the path of the image file, in MongoDB. All the solutions suggest storing the image as a buffer and then using it back in the source by converting the buffer to base64. I did that, but the resulting output resolves to the path of the image file rather than the image content. I am looking to save the image itself in the DB.
// saving image
var pic = {
  name : "profilePicture.png",
  img : "images/default-profile-pic.png",
  contentType : "image/png"
};

//schema
profilePic: { name: String, img: Buffer, contentType: String }

//retrieving back
var base64 = "";
var bytes = new Uint8Array( profilePic.img.data );
var len = bytes.byteLength;
for (var i = 0; i < len; i++) {
  base64 += String.fromCharCode( bytes[ i ] );
}
var proPic = "data:image/png;base64," + base64;
console.log(proPic);

//console output
data:image/png;base64,images/default-profile-pic.png
The output for proPic resolves to "data:image/png;base64,images/default-profile-pic.png"
A few links that I referred to before posting this:
How to do Base64 encoding in node.js?
How to convert image into base64 string using javascript
The problem is simply that you don't read and encode the picture; instead you use the path as a string.
Serverside using Node
If you want to perform it on the server side with an image on the filesystem, you can use something along the lines of the following:
var fs = require('fs');

// read and convert the file
var bitmap = fs.readFileSync("images/default-profile-pic.png");
var encImage = new Buffer(bitmap).toString('base64');

// saving image
var pic = {
  name : "profilePicture.png",
  img : encImage,
  contentType : "image/png"
};
....
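From here, persisting the object is a regular Mongoose update; a minimal sketch (the Profile model and the userId variable are assumptions, not part of the original answer):
// assumes something like: var Profile = mongoose.model('Profile', ProfileSchema);
Profile.findByIdAndUpdate(userId, { profilePic: pic }, function (err, doc) {
  if (err) return console.error(err);
  console.log('profile picture stored for', doc._id);
});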
Clientside
Again we need to load the image and encode it as base64. There is an answer about doing this on the client here.
Using the first approach, the result would be something like the following:
function toDataUrl(url, callback, outputFormat) {
  var img = new Image();
  img.crossOrigin = 'Anonymous';
  img.onload = function() {
    var canvas = document.createElement('CANVAS');
    var ctx = canvas.getContext('2d');
    var dataURL;
    canvas.height = this.height;
    canvas.width = this.width;
    ctx.drawImage(this, 0, 0);
    dataURL = canvas.toDataURL(outputFormat);
    callback(dataURL);
    canvas = null;
  };
  img.src = url;
}

toDataUrl("images/default-profile-pic.png", function(encImage) {
  // saving image
  var pic = {
    name : "profilePicture.png",
    img : encImage,
    contentType : "image/png"
  };
  //Proceed in the callback or use a method to pull out the data
  ....
});
The two links below saved me a lot of time. If you use "ng-file-upload", life becomes easy from there.
https://github.com/danialfarid/ng-file-upload#install
https://github.com/danialfarid/ng-file-upload
Below is what worked for me
//my html code
<div>
  <button type="file" ngf-select="onFileSelect($file)" ng-model="file" name="file" ngf-pattern="'image/*'"
          ngf-accept="'image/*'" ngf-max-size="15MB" class="btn btn-danger">
    Edit Profile Picture</button>
</div>
//my js function
function onFileSelect(file) {
  //var image = document.getElementById('uploadPic').files;
  image = file;
  if (image.type !== 'image/png' && image.type !== 'image/jpeg') {
    alert('Only PNG and JPEG are accepted.');
    return;
  }
  $scope.uploadInProgress = true;
  $scope.uploadProgress = 0;
  var reader = new window.FileReader();
  reader.readAsDataURL(image);
  reader.onloadend = function() {
    base64data = reader.result;
    $scope.profile.profilePic = base64data;
    ProfileService.updateProfile($scope.profile).then(function(response) {
      $rootScope.profile = response;
      $scope.profilePicture = $rootScope.profile.profilePic;
    });
  };
}

// when reading from the server, just put the profile.profilePic value into src
src="data:image/png;base64,{base64 string}"

// profile schema
var ProfileSchema = new mongoose.Schema({
  userid: String,
  //profilePic:{ name: String, img: Buffer, contentType: String },
  profilePic: String
});
I wouldn't say this is the best solution, but it's a good place to start. It also limits you to file sizes under 16 MB, beyond which you can use GridFS. In the above implementation the file is first converted to a blob, and then I convert it to base64 and assign that to my profile's string field.
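For files beyond that 16 MB document limit, a GridFS upload might look roughly like this (a sketch using the MongoDB driver's GridFSBucket; the bucket name and file path are placeholders):
var mongodb = require('mongodb');
var fs = require('fs');

// assumes `db` is an already connected mongodb Db instance
var bucket = new mongodb.GridFSBucket(db, { bucketName: 'profilePics' });

fs.createReadStream('images/default-profile-pic.png')
  .pipe(bucket.openUploadStream('profilePicture.png', { contentType: 'image/png' }))
  .on('error', function (err) { console.error(err); })
  .on('finish', function () { console.log('stored in GridFS'); });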
Hope this helps someone in saving their time.

Load file into IMAGE object using Phantom.js

I'm trying to load an image and put its data into an HTML Image element, but without success.
var fs = require("fs");
var content = fs.read('logo.png');
After reading the content of the file I have to somehow convert it to an Image or just draw it to a canvas. I was trying to convert the binary data to a Base64 data URL with code I found on Stack Overflow.
function base64encode(binary) {
  return btoa(unescape(encodeURIComponent(binary)));
}
var base64Data = 'data:image/png;base64,' + base64encode(content);
console.log(base64Data);
The returned Base64 is not a valid data URL. I tried a few more approaches, but without success. Do you know the best (shortest) way to achieve that?
This is a rather ridiculous workaround, but it works. Keep in mind that PhantomJS' (1.x?) canvas is a bit broken, so the canvas.toDataURL function returns largely inflated encodings. The smallest that I found was, ironically, image/bmp.
function decodeImage(imagePath, type, callback) {
  var page = require('webpage').create();
  var htmlFile = imagePath + "_temp.html";
  fs.write(htmlFile, '<html><body><img src="' + imagePath + '"></body></html>');

  var possibleCallback = type;
  type = callback ? type : "image/bmp";
  callback = callback || possibleCallback;

  page.open(htmlFile, function() {
    page.evaluate(function(imagePath, type) {
      var img = document.querySelector("img");
      // the following is copied from http://stackoverflow.com/a/934925
      var canvas = document.createElement("canvas");
      canvas.width = img.width;
      canvas.height = img.height;

      // Copy the image contents to the canvas
      var ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0);

      // Get the data-URL formatted image
      // Firefox supports PNG and JPEG. You could check img.src to
      // guess the original format, but be aware that using "image/jpg"
      // will re-encode the image.
      window.dataURL = canvas.toDataURL(type);
    }, imagePath, type);

    fs.remove(htmlFile);

    var dataUrl = page.evaluate(function() {
      return window.dataURL;
    });
    page.close();
    callback(dataUrl, type);
  });
}
You can call it like this:
decodeImage('logo.png', 'image/png', function(imgB64Data, type) {
  //console.log(imgB64Data);
  console.log(imgB64Data.length);
  phantom.exit();
});
or this
decodeImage('logo.png', function(imgB64Data, type) {
  //console.log(imgB64Data);
  console.log(imgB64Data.length);
  phantom.exit();
});
I tried several things. I couldn't figure out the encoding of the file as returned by fs.read. I also tried to dynamically load the file into the about:blank DOM through file:// URLs, but that didn't work. I therefore opted to write a local HTML file to disk and open it immediately.

Using PDFKit in the browser, inserting an image from a link

Is there a simple way to get an image from a url to put in a PDFKit pdf?
I have a PDF being automatically generated in-browser. There's an image I want included, to which I have a URL. The catch is that I'm generating the PDF in-browser. Since I have the URL available from the internet, it seems like there should be an easy way to turn that image into something readable by PDFKit.
Is there a way for JavaScript to turn an image URL into a buffer readable by PDFKit?
What I want is what you'd like the following command to do:
doc.image('http://upload.wikimedia.org/wikipedia/commons/0/0c/Cow_female_black_white.jpg')
Thanks in advance. The solutions I found online have your server take in the link and respond with a buffer. Is this the only way? Or is there a way to do it all in-browser, with no HTTP posting?
This is a pretty old question but I'll add my notes since it's the first suggestion when looking for "pdfkit browser image" on Google.
I based my solution on the data uri option supported by PDFKit:
Just pass an image path, buffer, or data uri with base64 encoded data
to the image method along with some optional arguments.
So after a quick look around I found the general approach to get a data uri from an image URL was using canvas, like in this post. Putting it together in PDFKit's interactive browser demo:
function getDataUri(url, callback) {
  var image = new Image();
  image.crossOrigin = 'anonymous';
  image.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = this.naturalWidth;   // or 'width' if you want a special/scaled size
    canvas.height = this.naturalHeight; // or 'height' if you want a special/scaled size
    canvas.getContext('2d').drawImage(this, 0, 0);

    // // Get raw image data
    // callback(canvas.toDataURL('image/png').replace(/^data:image\/(png|jpg);base64,/, ''));

    // ... or get as Data URI
    callback(canvas.toDataURL('image/png'));
  };
  image.src = url;
}
// Usage
getDataUri('http://pdfkit.org/docs/img/14.png', function(dataUri) {
  // create a document and pipe to a blob
  var doc = new PDFDocument();
  var stream = doc.pipe(blobStream());

  doc.image(dataUri, 150, 200, {
    width: 300
  });

  // end and display the document in the iframe to the right
  doc.end();
  stream.on('finish', function() {
    iframe.src = stream.toBlobURL('application/pdf');
  });
});
I retrieve the image via AJAX as a base64-encoded string, then use the following code to convert the base64-encoded string into a usable buffer:
var data = atob(base64);
var buffer = [];
for (var i = 0; i < data.length; ++i)
  buffer.push(data.charCodeAt(i));

buffer._isBuffer = true;
buffer.readUInt16BE = function(offset, noAssert) {
  var len = this.length;
  if (offset >= len) return;
  var val = this[offset] << 8;
  if (offset + 1 < len)
    val |= this[offset + 1];
  return val;
};

pdf.image(buffer);
See also https://github.com/devongovett/pdfkit/issues/354#issuecomment-68666894, where the same issue is discussed as applied to fonts.
I'll weigh in with my 2 cents on the issue, as I just spent a good deal of time getting it to work. It's a medley of answers I found googling the issue.
var doc = new PDFDocument();
var stream = doc.pipe(blobStream());

var files = {
  img1: {
    url: 'http://upload.wikimedia.org/wikipedia/commons/0/0c/Cow_female_black_white.jpg',
  }
};
Use the above object as a place to store all of the images and other files needed in the PDF.
var filesLoaded = 0;

//helper function to get 'files' object with base64 data
function loadedFile(xhr) {
  for (var file in files) {
    if (files[file].url === xhr.responseURL) {
      var unit8 = new Uint8Array(xhr.response);
      var raw = String.fromCharCode.apply(null, unit8);
      var b64 = btoa(raw);
      var dataURI = "data:image/jpeg;base64," + b64;
      files[file].data = dataURI;
    }
  }
  filesLoaded += 1;
  //Only create pdf after all files have been loaded
  if (filesLoaded == Object.keys(files).length) {
    showPDF();
  }
}

//Initiate xhr requests
for (var file in files) {
  files[file].xhr = new XMLHttpRequest();
  files[file].xhr.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
      loadedFile(this);
    }
  };
  files[file].xhr.responseType = 'arraybuffer';
  files[file].xhr.open('GET', files[file].url);
  files[file].xhr.send(null);
}

function showPDF() {
  doc.image(files.img1.data, 100, 200, {fit: [80, 80]});
  doc.end();
}
//IIFE that will download the pdf on load
var saveData = (function () {
  var a = document.createElement("a");
  document.body.appendChild(a);
  a.style = "display: none";
  return function (blob, fileName) {
    var url = window.URL.createObjectURL(blob);
    a.href = url;
    a.download = fileName;
    a.click();
    window.URL.revokeObjectURL(url);
  };
}());

stream.on('finish', function() {
  var blob = stream.toBlob('application/pdf');
  saveData(blob, 'aa.pdf');
});
The biggest issue I came across was getting the info from the arraybuffer type to a string with base64 data. I hope this helps!
Here is the js fiddle where most of the xhr code came from.
I did it using the NPM package axios to fetch the image as a buffer. In the project folder:
npm i axios
code:
var axios = require('axios');

let image = await axios.get("url", {responseType: 'arraybuffer'});

doc.image(image.data, 12, h, {
  width: 570,
  align: 'center',
  valign: 'center'
});
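Note that await only works inside an async function; a self-contained sketch of the same idea (the URL and coordinates are placeholders):
var axios = require('axios');

async function addRemoteImage(doc, url, x, y) {
  // with responseType 'arraybuffer', axios in Node returns a Buffer, which doc.image accepts
  var response = await axios.get(url, { responseType: 'arraybuffer' });
  doc.image(response.data, x, y, { width: 570, align: 'center', valign: 'center' });
}

// usage, from inside other async code:
// await addRemoteImage(doc, 'https://example.com/picture.jpg', 12, 100);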

Save captured png as arraybuffer

I'm trying to save an image to Dropbox, and having trouble getting the conversion correct. I have an img (captured using this sample) and I want to store it to Dropbox, which accepts an ArrayBuffer (sample here).
This is the code I found that should do the two conversions, first to base64, then into an ArrayBuffer:
function getBase64Image(img) {
  // Create an empty canvas element
  var canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;

  // Copy the image contents to the canvas
  var ctx = canvas.getContext("2d");
  ctx.drawImage(img, 0, 0);

  // Get the data-URL formatted image
  // Firefox supports PNG and JPEG. You could check img.src to
  // guess the original format, but be aware that using "image/jpg"
  // will re-encode the image.
  var dataURL = canvas.toDataURL("image/png");
  return dataURL.replace(/^data:image\/(png|jpg);base64,/, "");
}

function base64ToArrayBuffer(string_base64) {
  var binary_string = window.atob(string_base64);
  var len = binary_string.length;
  var bytes = new Uint8Array(len);
  for (var i = 0; i < len; i++) {
    var ascii = binary_string.charCodeAt(i);
    bytes[i] = ascii;
  }
  return bytes.buffer;
}
Saving is started like this
var img = $('#show-picture')[0];
var data = base64ToArrayBuffer( getBase64Image(img) );

dropbox.client.writeFile(moment().format('YYYYMMDD-HH-mm-ss') + '.png', data, function (error, stat) {
  if (error) {
    return dropbox.handleError(error);
  }
  // The image has been successfully written.
});
The problem is that the saved file is corrupted, and I'm a bit confused about what's wrong.
EDIT
Here's the link to the original file
https://www.dropbox.com/s/ekyhvu2t6d8ldh3/original.PNG and here to the corrupted. https://www.dropbox.com/s/f0oevj1z33brpur/20131219-22-23-14.png
I'm using this version of the dropbox.js: //cdnjs.cloudflare.com/ajax/libs/dropbox.js/0.10.2/dropbox.min.js
As you can see, the corrupted file is slightly bigger: 23.3 KB vs 32.6 KB.
Thanks for any help
Larsi
Moving my comment to an answer, since it seems that this works in the latest Datastore JS SDK but perhaps not in dropbox.js 0.10.2.
What browser and what version of the Dropbox library? And what's wrong with the image that's saved? (I assume by "corrupted" you mean that it won't open in whatever tool you're using... any more hints? Is the file size reasonable?) I just did a very similar test (toDataURL, atob, and Uint8Array) with Chrome on OS X and dropbox.com/static/api/dropbox-datastores-1.0-latest.js, and it seems to work.
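For reference, a sketch of that test (assuming client is an already-authenticated Dropbox.Client from the datastores SDK; the filename is a placeholder):
var dataURL = canvas.toDataURL("image/png");
var binary = atob(dataURL.split(',')[1]);
var bytes = new Uint8Array(binary.length);
for (var i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);

// client is an authenticated Dropbox.Client
client.writeFile('capture.png', bytes, function (error, stat) {
  if (error) return console.error(error);
  console.log('saved', stat.path);
});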

Setting img.src to dataUrl Leaks Memory

Below I've created a simple test case that shows that when an img tag's src is set to different dataUrls, it leaks memory. It looks like the image data is never unloaded after the src is changed to something else.
<!DOCTYPE html>
<html>
<head>
  <title>Leak Test</title>
  <script type="text/javascript">
    canvas = null;
    context = null;
    image = null;

    onLoad = function(event)
    {
      canvas = document.getElementById('canvas');
      context = canvas.getContext('2d');
      image = document.getElementById('image');
      setTimeout(processImage, 1000);
    }

    processImage = function(event)
    {
      var imageData = null;
      for (var i = 0; i < 500; i++)
      {
        context.fillStyle = "rgba(" + Math.floor(Math.random() * 256) + "," + Math.floor(Math.random() * 256) + "," + Math.floor(Math.random() * 256) + "," + Math.random() + ")";
        context.fillRect(0, 0, canvas.width, canvas.height);
        imageData = canvas.toDataURL("image/jpeg", .5);
        image.src = imageData;
      }
      setTimeout(processImage, 1000);
    }
  </script>
</head>
<body onload="onLoad(event)">
  <canvas id="canvas"></canvas>
  <img id="image"></img>
</body>
</html>
If you load this HTML page, RAM usage builds over time and is never cleaned up. This issue looks very similar: Rapidly updating image with Data URI causes caching, memory leak. Is there anything I can do to prevent this memory leak?
I ended up doing a workaround for the issue. The memory bloat only happens when image.src is changed, so I just bypassed the Image object altogether. I did this by taking the dataUrl, converting it into binary (https://gist.github.com/borismus/1032746), then parsing it using jpg.js (https://github.com/notmasteryet/jpgjs). Using jpg.js I can then copy the image back to my canvas, so the Image element is completely bypassed, thus negating the need to set its src attribute.
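A rough sketch of that workaround (hedged: JpegImage, parse and copyToImageData are jpg.js's API as I understand it, and imageData/context refer to the data URL and canvas context from the test page above):
// convert the data URL to bytes (per the gist linked above)
var binary = atob(imageData.split(',')[1]);
var bytes = new Uint8Array(binary.length);
for (var i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);

// decode with jpg.js instead of assigning to image.src
var jpeg = new JpegImage();
jpeg.parse(bytes);

var out = context.createImageData(jpeg.width, jpeg.height);
jpeg.copyToImageData(out);
context.putImageData(out, 0, 0);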
Panchosoft's answer solved this for me in Safari.
This workaround avoids the memory increase by bypassing the leaking Image object.
// Methods to address the memory leak problems in Safari
var BASE64_MARKER = ';base64,';
var temporaryImage;
var objectURL = window.URL || window.webkitURL;

function convertDataURIToBlob(dataURI) {
  // Validate input data
  if (!dataURI) return;

  // Convert image (in base64) to binary data
  var base64Index = dataURI.indexOf(BASE64_MARKER) + BASE64_MARKER.length;
  var base64 = dataURI.substring(base64Index);
  var raw = window.atob(base64);
  var rawLength = raw.length;
  var array = new Uint8Array(new ArrayBuffer(rawLength));

  for (var i = 0; i < rawLength; i++) {
    array[i] = raw.charCodeAt(i);
  }

  // Create and return a new blob object using binary data
  return new Blob([array], {type: "image/jpeg"});
}
then, in the processImage rendering loop:
// Destroy old image
if(temporaryImage) objectURL.revokeObjectURL(temporaryImage);
// Create a new image from binary data
var imageDataBlob = convertDataURIToBlob(imageData);
// Create a new object URL
temporaryImage = objectURL.createObjectURL(imageDataBlob);
// Set the new image
image.src = temporaryImage;
I'm also experiencing this issue and I do believe it's a browser bug. I see this happening in FF and Chrome as well. At least Chrome once had a similar bug that was fixed. I think it's not gone, or not completely gone. I see a constant increase in memory when I set img.src repeatedly to unique images. I have filed a bug with Chromium, if you want to put some weight behind it :)
https://code.google.com/p/chromium/issues/detail?id=309543&thanks=309543&ts=1382344039
(The bug-triggering example does not necessarily generate a new unique image every time around, but at least it does with a high probability.)
Some solutions not mentioned in the other answers:
For browser
jpeg-js
Similar to jpgjs mentioned by @PaulMilham, but with additional features and a nicer API (imo).
For NodeJS/Electron
sharp
General purpose image-processing library for NodeJS, with functionality to both read and write jpeg, png, etc. images (as files, or just in memory).
Since my program is in Electron, I ended up using sharp, as jpeg-js mentioned it as a more performant alternative (due to its core being written in native code).
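For what it's worth, a sketch of what that looks like with sharp (decoding to raw pixels, so no img element and no src assignment is involved; the function name is mine):
const sharp = require('sharp');

// decode a JPEG buffer to raw pixels without ever touching an <img> element
async function decodeJpeg(jpegBuffer) {
  const { data, info } = await sharp(jpegBuffer)
    .raw()
    .toBuffer({ resolveWithObject: true });
  // data is a raw pixel Buffer; info carries width, height and channels
  return { pixels: data, width: info.width, height: info.height, channels: info.channels };
}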
Setting the source to a fixed minimal dataURI after handling the image seems to fix the issue for me:
const dummyPng = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAC0lEQVQYV2NgAAIAAAUAAarVyFEAAAAASUVORK5CYII=';

img.onload = () => {
  // ... process the image
  URL.revokeObjectURL(img.src);
  img.onload = null;
  img.src = dummyPng;
};
img.src = URL.createObjectURL(new window.Blob([new Uint8Array(data)], {type: 'image/png'}));
