JIRA Ticket created due to base64encode failure: https://jira.appcelerator.org/browse/TC-5876
My Current CFG:
Titanium SDK 5.1.2.GA
Testing on an iPhone iOS 9.1
I'm stuck on a problem in a project for a client that requires images taken on the device (using the camera) to be sent to a web service and afterwards be viewable on any device using the app (both Android and iOS).
Titanium provides a Ti.Blob object (event.media) after taking a picture, which is not JSON serializable, and I need to somehow send this to the server. The server always responds with a JSON object, so this blob must somehow be made JSON serializable.
I've tried many ways without success:
1 - Base64Encode the Blob
var base64blob = Ti.Utils.base64encode(event.media);
This doesn't work: it hangs the app and throws an "ASL exceeded maximum size" error. I imagine the image is too large to be Base64 encoded.
2 - Read the Blob into a Buffer
var blobStream = Ti.Stream.createStream({ source: event.media, mode: Ti.Stream.MODE_READ });
var buffer = Ti.createBuffer({ length: event.media.length });
var bytes = blobStream.read(buffer);
This works, but I have no idea how to transform this buffer holding the image contents into something the server can return in a JSON object and that can later be turned back into an image blob.
The server can't handle Ti.Blob or Ti.Buffer objects: first, they are Titanium objects and the server is C# based, and second, neither is JSON serializable, so returning them in JSON doesn't work.
What I need is basically described in the imaginary example below:
var imageBlob = event.media;
var JSONSerializableImg = imageBlob.toJSON();
sendImageToServer(JSONSerializableImg);
var imgFromServer = getImageFromServer();
var imageBlob = imgFromServer.toBlob();
var imgView = createImageView({
image: imageBlob
});
I hope someone can help me with any conversion method possible.
Thanks
OK, this is what I think you have to do. Looking at the API, this is very doable.
1: You need to create an object server-side that will hold the blob.
public class BlobContainer
{
    public string fileName { get; set; }
    // ... (other properties)
    public byte[] data { get; set; }
}
2: Convert the important information from the blob into a binary array and send it to the server.
var blobStream = Ti.Stream.createStream({ source: myBlob, mode: Ti.Stream.MODE_READ });
var newBuffer = Ti.createBuffer({ length: myBlob.length });
var bytes = blobStream.read(newBuffer);
3: Then send the byte data to the server through Ajax requests. Be mindful of how big the array you're sending is; it might be advantageous to split it up and recombine it on the other side (this might not be necessary). See the chunking sketch after the code below:
var dataObjects = [
    { id: 1, data: [BYTE_DATA_PART] },
    { id: 2, data: [BYTE_DATA_PART] } // ...
];
$.each(dataObjects, function (i, a) {
    $.ajax({
        url: "BLA", data: JSON.stringify(a), dataType: "json", type: "POST",
        success: function () { /* CONTINUE */ },
        error: function () { alert("ERROR BRO"); }
    });
});
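If you do go the chunking route, a small helper can produce those numbered chunk objects. A minimal sketch, assuming byteArray is a plain array of byte values and the 64 KB chunk size is arbitrary:
function toChunks(byteArray, chunkSize) {
    var total = Math.ceil(byteArray.length / chunkSize);
    var chunks = [];
    for (var i = 0; i < byteArray.length; i += chunkSize) {
        // id lets the server reassemble the pieces in order; total tells it when it has N of N
        chunks.push({ id: chunks.length + 1, total: total, data: byteArray.slice(i, i + chunkSize) });
    }
    return chunks;
}
var dataObjects = toChunks(byteArray, 64 * 1024); // byteArray is an assumption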
4: Then, server side, receive each request into your little blob container, store it in a session or cache object, and once you have N out of N pieces, put it all back together and store that sucker in the database.
5: Retrieve the stuff in the reverse order. Just remember that it is stored as byte[] data. You may have to fiddle with it and store it as a string, because of the way the Ti buffer creates bytes and the way C# interprets them; the best approach is trial and error. Once you have all the pieces back on the client:
// assuming the server returned the bytes as a string
var newBuffer = Ti.createBuffer({ value: data });
var newBlob = newBuffer.toBlob();
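Alternatively, if you have the server hand the bytes back as a Base64 string inside the JSON, Titanium can rebuild the blob directly. A sketch, where jsonResponse.data is a hypothetical field name:
// jsonResponse.data is assumed to be a Base64 string returned by the server
var imageBlob = Ti.Utils.base64decode(jsonResponse.data);
var imgView = Ti.UI.createImageView({ image: imageBlob });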
To send and receive binary data, it's best to use Ti.Network.HTTPClient, which supports binary payloads directly.
There's a guide on uploading and downloading files here:
http://docs.appcelerator.com/platform/latest/#!/guide/File_Uploads_and_Downloads
JSON isn't designed to carry binary data, although Base64 encoded binary data should work; that is indeed what Ti.Utils.base64encode() is for. If you believe the "ASL exceeded maximum size" error you get shouldn't happen, please create a ticket on the Appcelerator JIRA.
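For reference, the multipart upload route that guide describes looks roughly like this. A sketch only; the endpoint URL and the photo field name are assumptions:
var xhr = Ti.Network.createHTTPClient({
    onload: function () { Ti.API.info('upload finished: ' + this.responseText); },
    onerror: function (e) { Ti.API.error('upload failed: ' + e.error); }
});
xhr.open('POST', 'https://example.com/upload'); // hypothetical endpoint
xhr.send({ photo: event.media }); // a Ti.Blob in the payload is sent as multipart/form-data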
I solved this issue by creating a separate server-side method specifically for uploading photos. I followed the link below for the server side:
PHP code
In Titanium I had to set XHR's header like this:
this.xhr.setRequestHeader("ContentType", "image/png");
this.xhr.setRequestHeader('enctype', 'multipart/form-data');
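For context, the surrounding request might have looked something like this; a sketch, with the endpoint and the media field name as assumptions:
this.xhr = Ti.Network.createHTTPClient();
this.xhr.open('POST', 'https://example.com/photos'); // hypothetical endpoint
this.xhr.setRequestHeader("Content-Type", "image/png");
this.xhr.setRequestHeader('enctype', 'multipart/form-data');
this.xhr.send({ media: event.media }); // "media" field name is an assumption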
That's it!
Thanks for all the answers.
I've used the following method before (not in Titanium, but another web-based mobile app platform).
function convertToDataURLviaCanvas(url, callback, outputFormat){
var img = new Image();
img.crossOrigin = 'Anonymous';
img.onload = function(){
var canvas = document.createElement('CANVAS');
var ctx = canvas.getContext('2d');
var dataURL;
canvas.height = this.height;
canvas.width = this.width;
ctx.drawImage(this, 0, 0);
dataURL = canvas.toDataURL(outputFormat);
callback(dataURL);
canvas = null;
};
img.src = url;
}
convertToDataURLviaCanvas('http://example.com/image.png', function(base64Img){
// Base64DataURL
});
I used this to send the Base64 encoded image string as JSON to my backend server, then decoded the image back on the server. It worked for me, but the Base64 encoded string is huge.
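On the Node side, decoding that string back into bytes is a one-liner. A sketch, assuming an Express-style handler and a hypothetical req.body.image field:
var fs = require('fs');
// app is assumed to be an Express application
app.post('/upload', function (req, res) {
    var imageBuffer = Buffer.from(req.body.image, 'base64'); // decode Base64 back into raw bytes
    fs.writeFile('upload.png', imageBuffer, function (err) {
        res.sendStatus(err ? 500 : 200);
    });
});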
Related
In a Node.js project with Socket.io, I receive an image via Socket.io like this:
socket.on('newImage', function (data) {
    var desc = data.description;
    var image = data.img; // I want the file size of this
});
In this code, the image variable contains the image's binary data; I want to detect the file size of that image. How?
Depending on how the data is encoded, you can retrieve the byte length using Buffer.byteLength:
var encoding = 'binary';
var data = new Buffer('hello world', encoding);
Buffer.byteLength(data, encoding); // 11
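Applied to the Socket.io handler from the question: if data.img arrives as a Node Buffer, its length property is already the size in bytes. A sketch:
socket.on('newImage', function (data) {
    var image = data.img;
    if (Buffer.isBuffer(image)) {
        console.log('image size: ' + image.length + ' bytes'); // Buffer length is a byte count
    }
});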
I'm not 100% sure, but from what I've read, when I send a blob (binary data) over a WebSocket, the blob does not carry any file information. (The official specification also states that WebSockets only transmit the raw binary.) That means losing:
the file size
the MIME type
user info (explained later)
I'm using https://github.com/websockets/ws
Testing:
Sending the blob from a file input directly:
ws.send(this.files[0]); // this should already contain the info
Creating a new blob from the file with the native JavaScript API, setting the proper MIME type:
ws.send(new Blob([this.files[0]], { type: this.files[0].type })); // also this
On both sides you get only the bare blob, without any of that information.
Would it be possible to prepend, say, 4 KB of predefined JSON data, also converted to binary, containing important information like the MIME type and the file size,
and then just split off those 4 KB when needed?
{"mime":"txt/plain","size":345}____________4KB_REST_OF_THE_BINARY
OR
ws.send({"mime":"txt\/plain","size":345})
ws.send(this.files[0])
Even if the first one is the worst solution ever, it would allow me to send everything in one go.
The second one has a big problem:
it's a chat that also allows sending files such as documents, images, music, and videos.
I could write some sort of handshaking system that sends the file/user info before the binary data.
BUT
if another person also sends a file, since everything is async, the handshaking system has no way to determine which file belongs to which user and MIME type.
So how do you properly send a binary file in a multiuser async environment?
I know I can convert to Base64, but that's about 33% bigger.
By the way, I'm totally disappointed with Apple... while Chrome displays every kind of binary data properly, my iOS devices can't handle blobs; only images will show from a blob or Base64 format, not even a simple .txt file. Basically only an <img> tag can read dynamic files.
How everything works (now):
a user sends a file
Node.js gets the binary data, plus user info... but not the MIME type, filename, or size
Node.js broadcasts the raw binary file to all the users (it can't attach user & file info)
clients create a blob URL (who sent that? XD)
EDIT
What I have now:
Client 1 (sends a file), Chrome:
fileInput.addEventListener('change',function(e){
var file=this.files[0];
ws.send(new Blob([file],{
type:file.type //<- SET MIMETYPE
}));
//file.size
},false);
Note: file is already a blob... but this is how you would normally create a new blob while specifying the MIME type.
Server (broadcasts the binary data to the other clients), Node.js
aaaaaand the mimetype is gone...
ws.addListener('message',function(binary){
var b=0,c=wss.clients.length;
while(b<c){
wss.clients[b++].send(binary)
}
});
Client 2 (receives the binary), Chrome:
ws.addEventListener('message',function(msg){
var blob=new Blob([msg.data],{
type:'application/octet-stream' //<- LOST
});
var file=window.URL.createObjectURL(blob);
},false);
Note: msg.data is already a blob... but this is how you would normally create a new blob while specifying the MIME type, which is lost.
In client 2 I need the MIME type, and naturally I also need the info about the user, which can be retrieved from client 1 or from the server (not a good choice)...
You're a bit out of luck here, because Node doesn't support the Blob interface: any data you send or receive in binary with Node is just binary. You would have to have something that knows how to interpret a Blob object.
Here's an idea; let me know if this works for you. Reading through the documentation for websockets/ws, it says it supports sending and receiving ArrayBuffers, which means you can use TypedArrays.
Here's where it gets nasty. You reserve a fixed number n of bytes at the beginning of every TypedArray to signal the MIME type, encoded in UTF-8 or what have you, and the rest of the TypedArray contains your file's bytes.
I would recommend Uint8Array, because UTF-8 code units are 8 bits long and your MIME text will be readable when encoded that way. As for the file bits, you'll probably just end up writing those down somewhere and appending an appropriate file ending.
Also note that this method of interpretation works both ways, whether in Node or in the browser.
This solution is really just a form of type casting and you might get some unexpected results. The fixed length of your mime type field is crucial.
Here it is illustrated. Copy, paste, set the image file to whatever you want, and run it. You'll see the MIME type I set pop out.
var fs = require('fs');
//https://stackoverflow.com/questions/8609289/convert-a-binary-nodejs-buffer-to-javascript-arraybuffer
function toUint8Array(buffer) {
var ab = new ArrayBuffer(buffer.length);
var array = new Uint8Array(ab);
for (var i = 0; i < buffer.length; ++i) {
array[i] = buffer[i];
}
return array;
}
//data is a raw Buffer object
fs.readFile('./ducklings.png', function (err, data) {
var mime = new Buffer('image/png');
var allBuffed = Buffer.concat([mime, data]);
var array = toUint8Array(allBuffed);
var mimeBytes = array.subarray(0, 9); // 9 = number of bytes in the mime Buffer ('image/png')
console.log(String.fromCharCode.apply(null, mimeBytes));
});
Here's how you do it on the client side:
SOLUTION A: GET A PACKAGE
Get buffer, an implementation of Node's Buffer API for browsers. The byte-buffer concatenation solution will then work exactly as before. You can append fields like To: and whatnot as well. The way you format your headers in order to best serve your clients will be an evolving process, I'm sure.
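With that package, the header-plus-bytes concatenation from the server example carries over almost verbatim. A sketch; the MIME header and the fileBytes variable are assumptions:
var Buffer = require('buffer/').Buffer; // trailing slash selects the npm package, not Node's built-in
var header = new Buffer('image/png'); // assumed fixed-length MIME header
var payload = Buffer.concat([header, new Buffer(fileBytes)]); // fileBytes is an assumed Uint8Array
ws.send(payload);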
SOLUTION B: OLD SCHOOL
STEP 1: Convert your Blob to an ArrayBuffer
Notes: How to convert a String to an ArrayBuffer
var fr = new FileReader();
fr.addEventListener('loadend', function () {
//Asynchronous action in part 2.
var message = concatenateBuffers(headerStringAsBuffer, fr.result);
ws.send(message);
});
fr.readAsArrayBuffer(blob);
STEP 2: Concatenate ArrayBuffers
function concatenateBuffers(buffA, buffB) {
    var byteLength = buffA.byteLength + buffB.byteLength;
    var resultBuffer = new ArrayBuffer(byteLength);
    // wrap each ArrayBuffer in a typed array/view
    var resultView = new Uint8Array(resultBuffer);
    var viewA = new Uint8Array(buffA);
    var viewB = new Uint8Array(buffB);
    // copy 8-bit integers, AKA bytes
    resultView.set(viewA);
    resultView.set(viewB, viewA.byteLength);
    return resultView.buffer;
}
STEP 3: Receive and Reblob
I'm not going to repeat how to convert the concatenated header bytes back into a string, because I've done that in the server example, but turning the file bytes into a blob of your MIME type is fairly simple:
new Blob([buffer.slice(offset, buffer.byteLength)], { type: mimetype });
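And pulling the fixed-length header back off the received ArrayBuffer might look like this; a sketch, where HEADER_LENGTH is an assumed constant that must match the sender:
var HEADER_LENGTH = 9; // 'image/png'.length; must match what the sender prepended
var headerView = new Uint8Array(buffer, 0, HEADER_LENGTH);
var mimetype = String.fromCharCode.apply(null, headerView);
var fileBlob = new Blob([buffer.slice(HEADER_LENGTH)], { type: mimetype });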
This Gist by robnyman goes into further details on how you would use an image transmitted via XHR, put it into localstorage, and use it in an image tag on your page.
I liked @Breedly's idea of prepending a fixed-length byte array to indicate the MIME type of the ArrayBuffer, so I created this npm package that I use when dealing with WebSockets; maybe others might find it useful too.
Example usage
const {
arrayBufferWithMime,
arrayBufferMimeDecouple
} = require('arraybuffer-mime')
// some image array buffer
const uint8 = new Uint8Array(1)
uint8[0] = 1
const ab = uint8.buffer
const mime = 'image/png'
const abWithMime = arrayBufferWithMime(ab, mime)
const { mime: decodedMime, arrayBuffer } = arrayBufferMimeDecouple(abWithMime) // renamed to avoid redeclaring `mime`
console.log(decodedMime) // "image/png"
console.log(arrayBuffer) // ArrayBuffer
I'm trying to decompress zlib'ed XML such as the following:
https://drive.google.com/file/d/0B52P0MZLTdw8ZzQwQzVpZGZVZWc
Uploading to online decompress services works, such as: http://i-tools.org/gzip
In PHP, I'm using this code and it works just fine; I get the XML string:
$raw = file_get_contents("file_here");
$uncompressed = zlib_decode($raw);
However, I want to do this in JavaScript.
The app is a client-side Chrome extension which uses chrome.devtools.network to read from the network logs.
It reads binary responses; see the example at the Google Drive link at the top.
The JS needs to decompress that response to its original XML, which is afterwards parsed into an object.
The only problem I have is the zlib decompress part.
As of the latest update, the decompression libraries work but the unpacking doesn't. Please skip to the Update Sept 16 at the bottom.
I have already tried several JavaScript libraries and still cannot make it work:
Pako: https://github.com/nodeca/pako
unpack() code: https://codereview.stackexchange.com/questions/3569/pack-and-unpack-bytes-to-strings
function unpack(str) {
var bytes = [];
for(var i = 0, n = str.length; i < n; i++) {
var char = str.charCodeAt(i);
bytes.push(char >>> 8, char & 0xFF);
}
return bytes;
}
$.get("file_here", function(response){
var charData = unpack(response);
var binData = new Uint8Array(charData);
var data = pako.inflate(binData);
var strData = String.fromCharCode.apply(null, new Uint16Array(data));
console.log(strData);
});
Error: Uncaught incorrect header check
The error is the same even when passing the response in directly:
new Uint8Array(response);
pako.inflate(response);
Imaya's zlib: https://github.com/imaya/zlib.js
$.get("file_here", function(response){
var inflate = new Zlib.Inflate(response);
var output = inflate.decompress();
console.log(output);
});
Error: Uncaught Error: unsupported compression method inflate.js:60
Still using Imaya's zlib, combining with this Stack Overflow question:
Decompress gzip and zlib string in javascript
$.get("file_here", function(response){
var response = response.split('').map(function(e) {
return e.charCodeAt(0);
});
var inflate = new Zlib.Inflate(response);
var output = inflate.decompress();
console.log(output);
});
Error: Uncaught Error: invalid fcheck flag:29 inflate.js:65
dankogai's js-deflate: https://github.com/dankogai/js-deflate
console.log(RawDeflate.inflate(response));
Output: empty
augustl's js-inflate: https://github.com/augustl/js-inflate
console.log(JSInflate.inflate(response));
Output: empty
zlib-browserify: https://github.com/brianloveswords/zlib-browserify
Error: ReferenceError: exports is not defined
This is just a wrapper for Imaya's zlib. I think this needs RequireJS? I'm not even sure how to use it. Can it be used without installing anything, with just jQuery/JS? As mentioned, the app is a downloadable Chrome extension with just HTML importing JS files.
UPDATE Sept 16, 2014
It seems the problem is with the JavaScript unpack() function. When I use the ByteArray generated by PHP (http://pastebin.com/uDWvK94B), the JavaScript decompression functions work.
PHP unpacking that works:
$unpacked = unpack("C*", $raw);
For the JavaScript unpack() code that I use, which doesn't work, see the top of the post under the Pako section.
So the new question is: why does JavaScript generate different ByteArray values than PHP does?
Is it really a problem with the unpack() function?
Or does something change, the encoding or whatever, when the JS fetches the file, so the bytes get messed up?
And lastly, what is your suggested fix?
UPDATE Sept 20, 2014
With more research, some of the answers here gave leads:
Sebastian S raised the idea that the problem lay in the manner of retrieving the data and had something to do with text encodings.
user3995789 provided an example showing it would work even without the unpack() function, though outside the context of Chrome extensions.
Isaac provided examples in the context of Chrome extensions, but it still did not work.
Combining all those leads brought me to a theory that the reason behind all this is that Chrome is unable to get "raw" data through its request.getContent function. See here for the Chrome documentation of that function.
As of now, I have taken the issue to the Chrome team; see here.
UPDATE March 24, 2015
Although the problem was not fully resolved, the answer I found most useful was from @Sebastian S, who proposed that the way I was receiving the data was at fault and that a bad conversion was the cause, which was closest to the actual problem.
jQuery reads responses as UTF-8 text; you have to read the raw bytes instead. This function will work:
function readTextFile(file)
{
var rawFile = new XMLHttpRequest();
rawFile.open('GET', file, true);
rawFile.responseType = 'arraybuffer';
rawFile.onload = function ()
{
var words = new Uint8Array(rawFile.response);
console.log(words[1]);
console.log(pako.ungzip(words));
};
rawFile.send();
}
For more information see this answer.
I understand that you want to use zlib decompression inside a Chrome extension while reading response bodies from the network log.
You first need to retrieve the Base64 content that will be decompressed; you can get it using the getContent method.
function zlibDecompress(base64Content){
// var base64Content = base64Content.split(',')[1]; // Not sure if need to keep it
// Decode base64 (convert ascii to binary)
var strData = atob(base64Content);
// Convert binary string to character-number array
var charData = strData.split('').map(function(x){return x.charCodeAt(0);});
// Turn number array into byte-array
var binData = new Uint8Array(charData);
// Pako inflate
var data = pako.inflate(binData, { to: 'string' });
return data;
}
chrome.devtools.network.onRequestFinished.addListener(
function(request) {
request.getContent(
function(content, encoding){
if(encoding == 'base64'){
var output = zlibDecompress(content);
}
}
);
}
);
https://developer.chrome.com/extensions/devtools_network#type-Request
Using XMLHttpRequest :
<script type="text/javascript" src="pako.js"></script>
<script type="text/javascript">
function zlibDecompress(url){
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
xhr.responseType = 'blob';
xhr.onload = function(oEvent) {
// Base64 encode
var reader = new window.FileReader();
reader.readAsDataURL(xhr.response);
reader.onloadend = function() {
var base64data = reader.result;
var base64 = base64data.split(',')[1];
// Decode base64 (convert ascii to binary)
var strData = atob(base64);
// Convert binary string to character-number array
var charData = strData.split('').map(function(x){return x.charCodeAt(0);});
// Turn number array into byte-array
var binData = new Uint8Array(charData);
// Pako inflate
var data = pako.inflate(binData, { to: 'string' });
console.log(data);
}
};
xhr.send();
}
zlibDecompress('fileurl');
</script>
If you want to use XMLHttpRequest from a Chrome extension, declare the host permissions in the manifest:
{
"name": "My extension",
...
"permissions": [
"http://www.domain.com/", // The domain that hold the file
"http://*/" // Or every domain
],
...
}
https://developer.chrome.com/extensions/xhr
Feel free to ask if you have any questions ;)
In my opinion, the question you should really be asking is: how do you retrieve the compressed data? As soon as it becomes a UTF-16 string, the trouble begins. I'm not even sure the conversion from raw byte data to JavaScript strings is lossless.
Since you wrote something about PHP, I assume you're communicating with some sort of backend. If so, there are options for handling binary data with native means. Maybe this can help you: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data
I want to send an image the user selected from their machine, along with form data, all wrapped up in a JSON object and sent to the server. I am using Node for the server. Is it possible to place the image in the JSON object along with the other form elements and read it in Node?
The most common way I've encountered is the Base64 string approach: you encode your image into a Base64 string and set it as part of the JSON object that you send to your server.
Another approach seems to be using binary data directly in JSON, but I've never tried it before, so I don't have much info on it.
Here's a code sample that does the Base64 encoding in JavaScript; specifically, look at the method below.
function getBase64Image(imgElem) {
// imgElem must be on the same server otherwise a cross-origin error will be thrown "SECURITY_ERR: DOM Exception 18"
var canvas = document.createElement("canvas");
canvas.width = imgElem.clientWidth;
canvas.height = imgElem.clientHeight;
var ctx = canvas.getContext("2d");
ctx.drawImage(imgElem, 0, 0);
var dataURL = canvas.toDataURL("image/png");
return dataURL.replace(/^data:image\/(png|jpg);base64,/, "");
}
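Once you have the string, posting it inside JSON is straightforward. A sketch; the img element and the /upload endpoint are assumptions:
var base64Img = getBase64Image(document.getElementById('preview')); // hypothetical img element
$.ajax({
    url: '/upload', // hypothetical endpoint
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify({ name: 'photo.png', image: base64Img })
});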
There is a way to achieve this: use the image data.
In JavaScript, on the client side, the FileReader will let you read binary image data and get it as a Base64 encoded string.
On client side:
var file = $('.file-upload > input')[0].files[0];
function upload(file) {
    var reader = new FileReader();
    // when the image data has been read
    reader.onload = function (event) {
        // I usually remove the prefix to keep only the data, but it depends on your server
        var data = event.target.result.replace("data:" + file.type + ";base64,", '');
        // make your ajax call here (note: jQuery has no `json` option; send the data as POST fields)
        $.ajax({
            url: 'and_so_on',
            type: 'POST',
            data: { data: data }
        });
    };
    // start reading data from the file
    reader.readAsDataURL(file);
}
On the server side, you will receive a Base64 encoded image that you can easily translate into binary with the Buffer constructor:
var buffer = new Buffer(data, 'base64');
Be warned that FileReader is not supported by all browsers.
I'm attempting to consume server-side code that is owned by another team and that I can't easily change. It is processing an image and returning it via Response.BinaryWrite:
MemoryStream ms = new MemoryStream();
image.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
var imageToReturn = ms.ToArray();
Response.ContentType = "image/jpg";
Response.BinaryWrite(imageToReturn);
However, when I attempt standard client-side processing of the result, like using JavaScript's btoa() to convert it, I get messages like "'btoa' failed: The string to be encoded contains characters outside of the Latin1 range".
I really just want to be able to display and work with this image, so any approach that would get it to appear in a canvas, or convert it to a data URL, etc., would help me out. Am I missing something?
If you just want to display the image, why not set the src attribute of an img tag to the URL of the ASP.NET page that writes the JPEG into the response?
If you want to display the image in a canvas, you can do it the following way:
myimage = new Image();
myimage.onload = function () {
    var canvas = document.getElementById('canv');
    var ctx = canvas.getContext('2d');
    ctx.drawImage(myimage, 0, 0); // draw at the desired x, y coordinates
};
myimage.src = '<url to the asp.net page>';
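If you do need the raw bytes client-side rather than just displaying them, requesting an ArrayBuffer sidesteps the btoa/Latin1 problem entirely. A sketch; the URL is a stand-in for the page that calls Response.BinaryWrite:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/GetImage.aspx', true); // hypothetical URL
xhr.responseType = 'arraybuffer'; // receive raw bytes, no string conversion involved
xhr.onload = function () {
    var blob = new Blob([xhr.response], { type: 'image/jpeg' });
    var img = new Image();
    img.onload = function () {
        document.getElementById('canv').getContext('2d').drawImage(img, 0, 0);
        window.URL.revokeObjectURL(img.src); // free the object URL once drawn
    };
    img.src = window.URL.createObjectURL(blob);
};
xhr.send();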