Say I have an <img/>. This img has src='http://somelocation/getmypic'. At some later time, I may want to change the content of the img based on an ajax call that receives binary data. I only know once I have received the data in the ajax callback whether I want to apply it to the image. I might decide not to.
As far as I can tell from the research that I have done, the only way to set binary data on the img is by base64-encoding the binary data and setting the img src to the resulting base64-encoded string (a data URL).
My first and main question is: what is the overhead? Doesn't this mean that I have tripled the processing and memory usage for an image download? My ajax callback gets the binary data that has been loaded into memory, then I base64-encode the binary stream, and then (I assume) the browser decodes this stream back to binary? Or am I missing something and it's actually not that bad?
My secondary question is: why is there no way to set binary data directly on the img without base64-encoding it? Most UI toolkits that I have used allow you to load an image into a 'PictureBox' (for example) from a file and also draw in the 'PictureBox'. Why is this not allowed with an img? I am thinking img.binary=bytes, which would set src to 'binary' or something like that. It seems this functionality is split between img and canvas, but sometimes you want to do both. I am just wondering if I am missing something in my understanding (e.g. there is some design or implementation benefit in not allowing binary data to be set directly on img).
Thank you.
Well, this is what I came up with. I am not sure what the performance or memory usage is like, but it is a lot easier than base64 encoding and it works with the ASP.NET MVC FileStreamResult. So, I thought that I would post it here in case it helps somebody else:
var oReq = new XMLHttpRequest();
oReq.open("post", '/somelocation/getmypic', true);
oReq.responseType = "blob";

oReq.onload = function (oEvent) {
    var blob = oReq.response;
    var imgSrc = URL.createObjectURL(blob);

    var $img = $('<img/>', {
        "alt": "test image",
        "src": imgSrc
    }).appendTo($('#bb_theImageContainer'));

    // Revoke the object URL only once the image has actually loaded,
    // otherwise the browser may release the blob before it is displayed.
    $img.on('load', function () {
        window.URL.revokeObjectURL(imgSrc);
    });
};

oReq.send(null);
The basic idea is that the data is not tampered with and is received as a blob. A URL is constructed for that blob (I think that essentially means you can use a URL to access that blob in memory). But the good thing is that you can set the img src to this, which is...awesome.
This answer put me on the right track, then this page and this page on MDN. Note the supported browsers.
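For what it's worth, the same idea can be written with the newer fetch API. This is just a sketch under the assumption that the same /somelocation/getmypic endpoint and #bb_theImageContainer element are used, and that the target browsers support fetch:
// Sketch: fetch-based equivalent of the XHR + blob approach above
fetch('/somelocation/getmypic', { method: 'post' })
    .then(function (response) {
        if (!response.ok) throw new Error('HTTP ' + response.status);
        return response.blob();
    })
    .then(function (blob) {
        var imgSrc = URL.createObjectURL(blob);
        var img = new Image();
        img.alt = 'test image';
        img.onload = function () {
            // release the object URL once the image has loaded
            URL.revokeObjectURL(imgSrc);
        };
        img.src = imgSrc;
        document.getElementById('bb_theImageContainer').appendChild(img);
    });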
Related
I want to resize an image on the client side using JavaScript. I found two solutions: one uses the .toDataURL() function and the other uses the .toBlob() function. Both solutions work. I'm just curious what the difference is between those two functions. Which one is better, or when should I use the .toDataURL() function versus the .toBlob() function?
Here is the code I used to try out those two functions, and I got slightly different results in image size (bytes) and image color (I'm not sure about this one). Is something wrong with the code?
<html>
<head>
<title>Php code compress the image</title>
</head>
<body>
<input id="file" type="file" onchange="fileInfo();"><br>
<div>
<h3>Original Image</h3>
<img id="source_image"/>
<button onclick="resizeImage()">Resize Image</button>
<button onclick="compressImage()">Compress Image</button>
<h1 id="source_image_size"></h1>
</div>
<div>
<h3>Resized Image</h3>
<h1> image from dataURL function </h1>
<img id="result_resize_image_dataURL"/>
<h1> image from toBlob function </h1>
<img id="result_resize_image_toBlob"/>
</div>
<div>
<fieldset>
<legend>Console output</legend>
<div id='console_out'></div>
</fieldset>
</div>
<script>
    // Console logging (html)
    if (!window.console)
        console = {};
    var console_out = document.getElementById('console_out');
    var output_format = "jpg";
    console.log = function(message) {
        console_out.innerHTML += message + '<br />';
        console_out.scrollTop = console_out.scrollHeight;
    };

    // Leftover references from the original example; these elements do not exist in this page
    var encodeButton = document.getElementById('jpeg_encode_button');
    var encodeQuality = document.getElementById('jpeg_encode_quality');

    function fileInfo() {
        var preview = document.getElementById('source_image');
        var file = document.querySelector('input[type=file]').files[0];
        var reader = new FileReader();
        reader.addEventListener("load", function(e) {
            preview.src = e.target.result;
        }, false);
        if (file) {
            reader.readAsDataURL(file);
        }
    }

    function resizeImage() {
        var loadedData = document.getElementById('source_image');
        var result_image = document.getElementById('result_resize_image_dataURL');

        var cvs = document.createElement('canvas');
        cvs.width = Math.round(loadedData.width / 4);
        cvs.height = Math.round(loadedData.height / 4);
        var ctx = cvs.getContext("2d");
        ctx.drawImage(loadedData, 0, 0, cvs.width, cvs.height);

        // Asynchronous path: toBlob
        cvs.toBlob(function(blob) {
            var newImg = document.getElementById('result_resize_image_toBlob'),
                url = URL.createObjectURL(blob);
            newImg.onload = function() {
                // no longer need to read the blob so it's revoked
                URL.revokeObjectURL(url);
            };
            newImg.src = url;
            console.log(blob.size / 1024);
        }, 'image/jpeg', 0.92);

        // Synchronous path: toDataURL
        var newImageData = cvs.toDataURL('image/jpeg', 0.92);
        var result_image_obj = new Image();
        result_image_obj.src = newImageData;
        result_image.src = result_image_obj.src;
        // The data URL is a JPEG here, so strip the matching JPEG header when estimating the size
        var head = 'data:image/jpeg;base64,';
        var imgFileSize = ((newImageData.length - head.length) * 3 / 4) / 1024;
        console.log(imgFileSize);
    }
</script>
Edited:
Based on the result of html5 Canvas getImageData or toDataURL - Which takes up more memory?, it is said that
"DataURL (BASE64) is imageData compressed to JPG or PNG, then converted to string, and this string is larger by 37% (info) than BLOB."
What does "string" mean here? Is it the same as the size in bytes? Using the code I provided above, the size difference is less than 1 KB (less than 1%). Is .toBlob() always better than the .toDataURL() function? Or is there a specific condition where it would be better to use .toDataURL()?
Definitely go with toBlob().
toDataURL is really just an early error in the specs, and had it been defined a few months later, it certainly wouldn't be here anymore, since we can do the same but better using toBlob.
toDataURL() is synchronous and will block your UI while doing the following operations:
move the bitmap from GPU to CPU
conversion to the image format
conversion to base64 string
toBlob(), on the other hand, will only do the first bullet synchronously, and will do the conversion to the image format in a non-blocking manner. Also, it will simply not do the third bullet.
So in raw operations, this means toBlob() does less, in a better way.
toDataURL result takes way more memory than toBlob's.
The data-URL returned by toDataURL is a USVString, which contains the full binary data encoded as base64.
As the quote in your question said, base64 encoding in itself means that the binary data will now be ~37% bigger. But here it's not only encoded to base64, it is stored using UTF-16 encoding, which means that every ASCII character takes twice the memory needed by raw ASCII text, and we jump to a file ~174% bigger than its original binary data...
And it's not over... Every time you use this string somewhere, e.g. as the src of a DOM element*, or when sending it through a network request, this string can get duplicated in memory once again.
*Though modern browsers seem to have mechanisms to handle this exact case
You (almost) never need a data-url.
Everything you can do with a data-url, you can do better with a Blob and a Blob-URI.
To display, or locally link to the binary data (e.g for user to download it), use a Blob-URI, using the URL.createObjectURL() method.
It is just a pointer to the binary data held in memory that the Blob itself points to. This means you can duplicate the blob-URI as many times as you wish, it will always be a ~100 chars UTF-16 string, and the binary data will not move.
If you wish to send the binary data to a server, send it as binary directly, or through a multipart/formdata request.
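As a rough illustration of sending the binary directly (a sketch only; the /upload endpoint and the 'picture' field name are made up, not part of the answer above), a canvas blob can go out as multipart/form-data like this:
// Hypothetical sketch: upload a canvas Blob as multipart/form-data
canvas.toBlob(function (blob) {
    var form = new FormData();
    form.append('picture', blob, 'picture.jpg'); // the third argument gives the part a filename
    fetch('/upload', { method: 'POST', body: form });
}, 'image/jpeg', 0.92);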
If you wish to save it locally, then use IndexedDB, which is able to save binary files. Storing binary files in LocalStorage is a very bad idea as the Storage object has to be loaded at every page-load.
The only case where you may need data-urls is if you wish to create standalone documents that need to embed binary data, accessible after the current document is dead. But even then, to create a data-url version of the image from your canvas, use a FileReader to which you pass the Blob returned by canvas.toBlob(). That makes for a completely asynchronous conversion and avoids blocking your UI for no good reason, as in the sketch below.
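A minimal sketch of that last approach, assuming a canvas variable and that you genuinely need a data-url:
// Sketch: asynchronous data-url generation via toBlob + FileReader
canvas.toBlob(function (blob) {
    var reader = new FileReader();
    reader.onload = function () {
        var dataUrl = reader.result; // e.g. "data:image/png;base64,..."
        // embed dataUrl in the standalone document here
    };
    reader.readAsDataURL(blob);
}, 'image/png');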
I keep facing a problem converting an XHR request response properly. I've read a couple of solutions on Stack Overflow and on some other platforms but couldn't succeed in converting the string to a data URL.
I've tried to convert the string through the atob() and btoa() methods mentioned on MDN and this Stack Overflow post.
I've kept digging through lots of different methods to find a solution, but none of them got me to my goal.
How can I convert the response below to a data URL properly, so that I can set that URL as an image src?
XHR Request Response:
"�PNG
IHDR�� m"H3PLTE���������������������������������������������������?�U�iIDATx�흋�� Eۈ�Z����R�[���BB�]�_�Y�� x��a�a�a�a�a�a�a�a��-��nFI]]�}_x�}�����$ʆ�h�s�_4M_w�WI��*��.o���"��~۔�C^uڟ�P5W�'�[��(��}�=)���U�����Z�J#U��G�'�8G�ۓH�>��E���G�>݁K�ތ4l��C��
v�xu���?�*R��^������ł�B[�YkkY����=�
l|H�s7Yp"���+6zV�cvSj+�
�}�c�c݄Pކ
~�W�N֨(�3����0[QƄ�Q�)o�_R,�]J����G�b��?��M 9� w*h��!=�%"�4������˔*a��
���6��w'��>��el3�e�l����c�ݍ(U.p�Q2�э����#��BɺB�h�4Sh��I�
�s��B���P735���(L�KU_�����s��v�]~�������6�+
/��iD��� �����uԏf�ﳽ�}nA�1_7��t�
+m�2W���P:�8�N�.Ԉ}�KQ�`�G0�P�Y�}�=A|�$� ��Xǭ�)w���>����m�J�R�֖��~}�n�#�G��7�ͽ���d������58C��i�6|�&�ۄ?pIl:P�l *FE�
q��wj��v��6�.�BQ�߁�����GBm�{�(�
�!f�k�Df*?}�+�N��"u��V,N��.eҚj��r�t��А�r��>)��*a��J����4�U���L��Z�/�ҵ���e=�;Qp����=x�[5=N��:8O}��k�?�Rr"�ma��tڱQ�I���R���=ܣU�MI�3Y�6\�~�v�.yJ�)��q�&���/�_�R�A-����{Ҡ�$��RNx%�}'�D�Tm�d9��}�n��~�kz��Ӝ���K� b��] L|�iqo3�O��p��l�� 'd�$�D�!:e��F=����'�"7g�F=b��7+Qܤ֩n�_"��c�����w$~`�V�"��I�{�&R̰G�O�|��%� ��/�L���>� �wb�S����- �3O����*J��7�͈�
�m�JL�Fdҗ��0��>���0��<����0�I!�33,y
d>3Ò�<y��)�'�pxS�E����;�����)�2�p��Dt��P�*�����Y�.�܈�
��D�5�Y�Ld����l��cb�lQ�|4�DED�͒�n
^3��&F!|��D��?�'��q
G�jN�h�\� 2ODVu�d���'���LLs_�ۏ�>CV� �gQ�{���e�
���2��*�Q�>|y��fg�M9a��E�IEND�B`�"
UPDATE:
I've succeeded in rendering the image with the help of this post:
https://stackoverflow.com/a/8022521/7163711
Your XHR data is a PNG file, as expected. In order to show you how to embed it properly, I've taken the liberty of generating a "better" PNG - a 10x10 yellow square. It is base64-encoded in b64data on the first line of the snippet. The second line sets yourData to what you would have gotten from an XHR.
var b64data = "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFklEQVR42mP8/5/hDAMRgHFUIX0VAgBfZRvvtreybgAAAABJRU5ErkJggg==";
// Assuming your data is a raw PNG
var yourData = atob(b64data);
// We convert...
var btoa_data = btoa(yourData);
var elem = document.createElement("img");
elem.src = "data:image/png;base64,"+btoa_data;
document.getElementById("app").append(elem);
console.log(atob(b64data));
<div id="app"></div>
As you can see, btoa() works in order to base64-encode data; the only thing you need to do after this is to indicate in the src part of your img tag that:
It is a PNG (if it isn't, or if you are not sure if it will be, you'll need to check the first few bytes of your reply)
It is base64-encoded
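If you control the XHR yourself, a less fragile variant is to request an ArrayBuffer, check the file signature, and build the data URL from the raw bytes rather than round-tripping through a mangled string. This is only a sketch: the URL is a placeholder, and it reuses the #app container from the snippet above.
// Sketch: fetch the image as an ArrayBuffer, verify the PNG signature, then build a data URL
var xhr = new XMLHttpRequest();
xhr.open('GET', '/path/to/image', true); // placeholder URL
xhr.responseType = 'arraybuffer';
xhr.onload = function () {
    var bytes = new Uint8Array(xhr.response);
    // PNG files start with the signature bytes 89 50 4E 47
    var isPng = bytes[0] === 0x89 && bytes[1] === 0x50 && bytes[2] === 0x4E && bytes[3] === 0x47;
    var binary = '';
    for (var i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    var elem = document.createElement('img');
    elem.src = 'data:' + (isPng ? 'image/png' : 'application/octet-stream') + ';base64,' + btoa(binary);
    document.getElementById('app').append(elem);
};
xhr.send();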
I'm not 100% sure, but from what I've read, when I send a blob (binary data) over a websocket, the blob does not contain any file information (the official specification also states that websockets only send the raw binary data), such as:
the filesize
the mimetype
user info (explained later)
I'm using https://github.com/websockets/ws
Testing:
Sending the blob directly from a file input.
ws.send(this.files[0]) //this should already contain the info
Creating a new blob from the file with the native JavaScript API, setting the proper mimetype.
ws.send(new Blob([this.files[0]],{type:this.files[0].type})); //also this
In both cases the receiving side gets only the bare blob, without any other information.
Is it possible to prepend, let's say, 4 KB of predefined JSON data (also converted to binary) that contains important information like the mimetype and the filesize,
and then just split off the 4 KB when needed?
{"mime":"txt/plain","size":345}____________4KB_REST_OF_THE_BINARY
OR
ws.send({"mime":"txt\/plain","size":345})
ws.send(this.files[0])
Even if the first one is the worst solution ever, it would allow me to send everything in one go.
The second one has a big problem:
it's a chat that also allows sending files like documents, images, music, and videos.
I could write some sort of handshaking system that sends the file/user info before I send the binary data.
BUT
if another person also sends a file, since everything is async, the handshaking system has no way to determine which file belongs to which user and mimetype.
So how do you properly send a binary file in a multiuser async environment?
I know I can convert to base64, but that's about 30% bigger.
Btw, I'm totally disappointed with Apple... while Chrome shows every kind of binary data properly, my iOS devices are not able to handle blobs; only images will show in blob or base64 format, not even a simple txt file. Basically only an <img> tag can read dynamic files.
How everything works (now):
user sends a file
nodejs gets the binary data, and also user info... but not the mimetype, filename, or size.
nodejs broadcasts the raw binary file to all the users (it can't specify user & file info).
clients create a blob URL (who sent that? XD).
EDIT
What I have now:
Client 1 (sends a file) - CHROME
fileInput.addEventListener('change', function (e) {
    var file = this.files[0];
    ws.send(new Blob([file], {
        type: file.type // <- SET MIMETYPE
    }));
    // file.size
}, false);
note: file is already a blob ... but this is how you would normally create a new blob specifying the mimetype.
Server (broadcasts the binary data to the other clients) - NODEJS
aaaaaand the mimetype is gone...
ws.addListener('message', function (binary) {
    var b = 0, c = wss.clients.length;
    while (b < c) {
        wss.clients[b++].send(binary);
    }
});
Client 2 (receives the binary) - CHROME
ws.addEventListener('message', function (msg) {
    var blob = new Blob([msg.data], {
        type: 'application/octet-stream' // <- LOST
    });
    var file = window.URL.createObjectURL(blob);
}, false);
note: msg.data is already a blob ... but this is how you would normally create a new blob, specifying the mimetype, which is lost.
In client 2 I need the mimetype, and naturally I also need the info about the user, which can be retrieved from client 1 or the server (not a good choice)...
You're a bit out of luck with this because Node doesn't support the Blob interface, so any data you send or receive in binary with Node is just binary. You would have to have something that knew how to interpret a Blob object.
Here's an idea, and let me know if this works. Reading through the documentation for websockets/ws, it says it supports sending and receiving ArrayBuffers, which means you can use TypedArrays.
Here's where it gets nasty. You set a certain fixed number n of bytes at the beginning of every TypedArray to signal the mime type, encoded in UTF-8 or what have you, and the rest of your TypedArray contains your file's bytes.
I would recommend using Uint8Array, because ASCII characters are single bytes in UTF-8 and your mime-type text will be readable when encoded that way. As for the file bytes, you'll probably just end up writing those out somewhere and appending an appropriate ending (file extension) to them.
Also note, this method of interpretation works both ways whether from Node or in the Browser.
This solution is really just a form of type casting and you might get some unexpected results. The fixed length of your mime type field is crucial.
Here it is illustrated. Copy, paste, set the image file to whatever you want and then run that. You'll see the mime type I set pop out.
var fs = require('fs');
//https://stackoverflow.com/questions/8609289/convert-a-binary-nodejs-buffer-to-javascript-arraybuffer
function toUint8Array(buffer) {
    var ab = new ArrayBuffer(buffer.length);
    var array = new Uint8Array(ab);
    for (var i = 0; i < buffer.length; ++i) {
        array[i] = buffer[i];
    }
    return array;
}

// data is a raw Buffer object
fs.readFile('./ducklings.png', function (err, data) {
    var mime = Buffer.from('image/png'); // Buffer.from replaces the deprecated new Buffer()
    var allBuffed = Buffer.concat([mime, data]);
    var array = toUint8Array(allBuffed);
    var mimeBytes = array.subarray(0, mime.length); // number of bytes in the mime Buffer (9 here)
    console.log(String.fromCharCode.apply(null, mimeBytes));
});
Here's how you do it on the client side:
SOLUTION A: GET A PACKAGE
Get buffer, an implementation of Node's Buffer API for browsers. The solution to concatenate byte buffers will work exactly as before. You can append fields like To: and whatnot as well. The way you format your headers in order to best serve your clients will be an evolving process, I'm sure.
SOLUTION B: OLD SCHOOL
STEP 1: Convert your Blob to an ArrayBuffer
Notes: How to convert a String to an ArrayBuffer
var fr = new FileReader();
fr.addEventListener('loadend', function () {
    // Asynchronous action in part 2.
    var message = concatenateBuffers(headerStringAsBuffer, fr.result);
    ws.send(message);
});
fr.readAsArrayBuffer(blob);
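headerStringAsBuffer is left undefined in the snippet above; one way to produce it (a sketch only, assuming the header is a plain JSON string and blob is the same blob being read) is with TextEncoder:
// Sketch: build the header buffer from a JSON string (the field names here are made up)
var header = JSON.stringify({ mime: blob.type, size: blob.size });
var headerStringAsBuffer = new TextEncoder().encode(header).buffer; // UTF-8 bytes as an ArrayBuffer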
STEP 2: Concatenate ArrayBuffers
function concatenateBuffers(buffA, buffB) {
    var byteLength = buffA.byteLength + buffB.byteLength;
    var resultBuffer = new ArrayBuffer(byteLength);
    // wrap the ArrayBuffers in typedArrays/views
    var resultView = new Uint8Array(resultBuffer);
    var viewA = new Uint8Array(buffA); // the views must wrap the source buffers, not the result
    var viewB = new Uint8Array(buffB);
    // Copy 8 bit integers AKA Bytes
    resultView.set(viewA);
    resultView.set(viewB, viewA.byteLength);
    return resultView.buffer;
}
STEP 3: Receive and Reblob
I'm not going to repeat how to convert the concatenated string bytes back into a string because I've done it in the server example, but turning the file bytes into a blob of your mime type is fairly simple (note that the Blob constructor takes an array of parts):
new Blob([buffer.slice(offset, buffer.byteLength)], {type: mimetype});
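Putting the receive side together, here is a sketch under the same assumptions as above (a fixed-length mime field at the start of the buffer; the 9-byte length matches the 'image/png' example from the server snippet):
// Sketch: split a received ArrayBuffer back into its mime type and file blob
function reblob(buffer, mimeFieldLength) {
    var mimeBytes = new Uint8Array(buffer, 0, mimeFieldLength);
    var mimetype = String.fromCharCode.apply(null, mimeBytes);
    var fileBytes = buffer.slice(mimeFieldLength);
    return new Blob([fileBytes], { type: mimetype });
}
// usage, e.g. in a websocket message handler where ws.binaryType = 'arraybuffer'
// var blob = reblob(event.data, 9);
// var url = URL.createObjectURL(blob);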
This Gist by robnyman goes into further details on how you would use an image transmitted via XHR, put it into localstorage, and use it in an image tag on your page.
I liked @Breedly's idea of prepending a fixed-length byte array to indicate the mime type of the ArrayBuffer, so I created this npm package that I use when dealing with websockets; maybe others might find it useful.
Example usage
const {
  arrayBufferWithMime,
  arrayBufferMimeDecouple
} = require('arraybuffer-mime')

// some image array buffer
const uint8 = new Uint8Array(1)
uint8[0] = 1
const ab = uint8.buffer

const mime = 'image/png'
const abWithMime = arrayBufferWithMime(ab, mime)

// destructure into a new name to avoid redeclaring the `mime` constant
const {mime: decodedMime, arrayBuffer} = arrayBufferMimeDecouple(abWithMime)

console.log(decodedMime) // "image/png"
console.log(arrayBuffer) // ArrayBuffer
I'm working on a testing framework that needs to pass files to the drop listener of a PLUpload instance. I need to create blob objects to pass inside a Data Transfer Object of the sort generated on a Drag / Drop event. I have it working fine for text files and image files. I would like to add support for PDFs, but it seems that I can't get the encoding right after retrieving the response. The response is coming back as text because I'm using Sahi to retrieve it in order to avoid cross-domain issues.
In short: the string I'm receiving is UTF-8 encoded and therefore the content looks like you opened a PDF with a text editor. I am wondering how to convert this back into the necessary format to create a blob, so that after the document gets uploaded everything looks okay.
What steps do I need to go through to convert the UTF-8 string into the proper blob object? (Yes, I am aware I could submit an XHR request and change the responseType property and (maybe) get closer, however due to complications with the way Sahi operates I'm not going to explain here why I would prefer not to go this route).
Also, I'm not familiar enough but I have a hunch maybe I lose data by retrieving it as a string? If that's the case I'll find another approach.
The existing code and the most recent approach I have tried is here:
var data = '%PDF-1.7%����115 0 obj<</Linearized 1/L ...'
var arr = [];
var utf8 = unescape(encodeURIComponent(data));
for (var i = 0; i < utf8.length; i++) {
arr.push(utf8.charCodeAt(i));
}
var file = new Blob(arr, {type: 'application/pdf'});
It looks like you were close. I just did this for a site which needed to read a PDF from another website and drop it into a fileuploader plugin. Here is what worked for me:
var url = "http://some-websites.com/Pdf/";
//You may not need this part if you have the PDF data locally already
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
if (this.readyState == 4 && this.status == 200) {
//console.log(this.response, typeof this.response);
//now convert your Blob from the response into a File and give it a name
var fileOfBlob = new File([this.response], 'your_file.pdf');
// Now do something with the File
// for filuploader (blueimp), just use the add method
$('#fileupload').fileupload('add', {
files: [ fileOfBlob ],
fileInput: $(this)
});
}
}
xhr.open('GET', url);
xhr.responseType = 'blob';
xhr.send();
I found help on the XHR as blob here. Then this SO answer helped me with naming the File. You might be able to use the Blob by itself, but you won't be able to give it a name unless it's passed into a File.
Is there a way to use jQuery or Javascript to detect the colorspace of a JPG image that is being uploaded via file input? To be more accurate I need a way to kick back a response when a user attempts to upload a CMYK colorspaced JPG. I would like to catch this on the client side if at all possible. I would prefer to do this using jQuery/js, but I'm definitely open to libraries that may have this functionality. Thanks in advance for any help.
I've looked around a bit, and it seems like JavaScript could be powerful enough to do this, albeit possibly very slowly.
Here is a stackoverflow question that deals with using JavaScript for face detection: Any library for face recognition in JavaScript?
I know that doesn't answer the question specifically, but it does demonstrate that JavaScript could potentially be leveraged for what you want.
Here is a question that deals with using JS along with the canvas object to get the average color of an image: Get average color of image via Javascript
There are some answers there that may be helpful for you. One of the answers there links to a library called Color Thief that could do what you want: https://github.com/lokesh/color-thief
If no one has already built what you want, my takeaway here is that it would be a laborious task, which may or may not yield results fast enough to be useful.
I had a comment that you could do a jQuery ajax call to get the raw data. I don't think that is actually true, as there is no way to get the raw data that way. However, many browsers support the "arraybuffer" responseType of an XMLHttpRequest, and you can use that. Here is a code snippet:
var img_url = "/images/testimg.jpg";
var req = new XMLHttpRequest();
req.open("GET", img_url, true);
req.responseType = "arraybuffer";
req.onload = function (event)
{
var buf = req.response;
var num_channels = null;
if (buf)
{
var img = new Uint8Array(buf);
for (var i=0; i<img.length-9; i++)
{
if (img[i]==0xFF && (img[i+1]==0xC0 || img[i+1]==0xC1))
{
var num_channels = img[i+9];
break;
}
}
}
if (num_channels==4)
console.log("This is a four-channel image.");
};
req.send(null);
Obviously, you'd want to actually do something with the result, and there should be better checking to make sure this IS a JPEG and not a random collection of bytes. Furthermore, this will only work on Baseline and Extended JPEGs, not progressive, arithmetic encoding, JPEG 2000, or anything else.
This doesn't really test if the colorspace is CMYK (or YCCK), just that it has four components, but the basic idea is there.
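Since the question is about a file chosen through a file input rather than a URL, here is a hedged sketch of hooking the same marker scan up to an input element; the 'file' element id and the log messages are assumptions, not part of the answer above.
// Sketch: run the same channel-count scan on a file chosen in an <input type="file" id="file">
document.getElementById('file').addEventListener('change', function () {
    var file = this.files[0];
    if (!file) return;
    var reader = new FileReader();
    reader.onload = function () {
        var img = new Uint8Array(reader.result);
        // JPEG files start with the SOI marker FF D8
        if (img[0] !== 0xFF || img[1] !== 0xD8) {
            console.log("Not a JPEG file.");
            return;
        }
        var num_channels = null;
        for (var i = 2; i < img.length - 9; i++) {
            if (img[i] === 0xFF && (img[i + 1] === 0xC0 || img[i + 1] === 0xC1)) {
                num_channels = img[i + 9];
                break;
            }
        }
        if (num_channels === 4) {
            console.log("Four components - likely a CMYK (or YCCK) JPEG.");
        }
    };
    reader.readAsArrayBuffer(file);
});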