I'm trying to convert a blob (created with zip.js) to a base64 string and persist it in the WebSQL database. Then I would also like to do this process the other way around. Anyway, my test code (without the compression) looks something like:
var blob = new Blob([data], {
    type: "text/plain"
});
blobToBase64(blob, function(b64) { // convert BLOB to BASE64
    var newBlob = base64ToBlob(b64); // convert BASE64 to BLOB
    console.log(blob.size + " != " + newBlob.size);
});
See a working example: http://jsfiddle.net/jeanluca/4bn5G/
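(blobToBase64 and base64ToBlob are helpers from the fiddle; for context, a minimal FileReader-based sketch of what the first one typically looks like:)

// Hypothetical sketch of the blobToBase64 helper used above
// (the real one lives in the fiddle).
function blobToBase64(blob, callback) {
    var reader = new FileReader();
    reader.onload = function() {
        // reader.result is a data URI ("data:text/plain;base64,....");
        // strip the prefix to get just the base64 payload.
        callback(reader.result.split(',')[1]);
    };
    reader.readAsDataURL(blob);
}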
So, the strange thing is that it works in Chrome, but not in Safari (also not on my iPad).
I also tried to rewrite the base64ToBlob to
function base64ToBlob(base64) {
    var binary = atob(base64);
    return new Blob([binary]);
}
But then the decompression doesn't work anymore, giving me an "IndexSizeError: DOM Exception 1" exception.
Any suggestions about what might be wrong in my code?
Thanks!
Well, I found a solution just after posting my question.
Instead of
new Blob([data]);
do
new Blob([data.buffer]);
Notice the addition of ".buffer".
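Applied to the base64ToBlob above, the working version looks something like this (a sketch; the loop body is an assumption based on the usual atob-to-Uint8Array pattern):

function base64ToBlob(base64) {
    var binary = atob(base64);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    // Passing the underlying ArrayBuffer (".buffer") instead of the
    // typed array itself is the fix described above.
    return new Blob([bytes.buffer]);
}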
I have three failing versions of the following code in a Chrome extension, which attempts to intercept a click on a link pointing to a PDF file, fetch that file, convert it to base64, and then log it. But I'm afraid I don't really know anything about binary formats and encodings, so I'm making a royal mess of this.
var links = document.getElementsByTagName("a");

function transform(blob) {
    return btoa(String.fromCharCode.apply(null, new Uint8Array(blob)));
}

function getlink(link) {
    var x = new XMLHttpRequest();
    x.open("GET", link, true);
    x.responseType = 'blob';
    x.onload = function(e) {
        console.log("Raw response:");
        console.log(x.response);
        console.log("Direct transformation:");
        console.log(btoa(x.response));
        console.log("Mysterious thing I got from SO:");
        console.log(transform(x.response));
        window.location.href = link;
    };
    x.onerror = function(e) {
        console.error(x.statusText);
    };
    x.send(null);
}
for (var i = 0, len = links.length; i < len; i++) {
    var l = links[i];
    l.addEventListener("click", function(e) {
        e.preventDefault();
        e.stopPropagation();
        e.stopImmediatePropagation();
        getlink(this.href);
    }, false);
}
Version 1 didn't set x.responseType and didn't call transform. It was my original, naive implementation. It threw an error: "The string to be encoded contains characters outside of the Latin1 range."
After googling that error, I found this prior SO question, which suggests that when parsing an image: the response type needs to be set to blob (so this code does that), and there's some weird line whose purpose I don't understand at all: String.fromCharCode.apply(null, new Uint8Array(blob)).
Because I know nothing about binary formats, I guessed, probably stupidly, that making a PDF base64 would be the same as making some random image format base64. So, in fine SO tradition, I copied code that I don't really understand. In stages.
Version 2 of the code just set the response type to blob but didn't try the second transformation. And the code worked, and logged something that looked like a base64 string, but a clearly incorrect string. In its entirety, it logged:
W29iamVjdCBCbG9iXQ==
Which is just goofily wrong. It's obviously too short for a 46k PDF file (it actually decodes to the literal string "[object Blob]"), and a reference base64 encoding I created with Python from the command line was much, much longer, as one would expect.
Version 3 of the code then also applies the mysterious transformation using String.fromCharCode and all the rest, which I shoved into the transform function.
However, that doesn't log anything at all: a blank line appears in the console in its appropriate place. No errors, no nonsense output, just a blank line.
I know I'm getting the correct file from prior testing. Also, the call to log the raw response object produces Blob {size: 45587, type: "application/pdf"}, which is the correct filesize for the pdf I'm experimenting with, so the blob actually contains what it should when it gets into the browser.
I'm using, and only need to support, a current version of chrome.
Can someone tell me what I'm doing wrong?
Thanks!
If you only need to support modern browsers, you should also be able to use FileReader#readAsDataURL.
That would let you do something like this:
var reader = new FileReader();
reader.addEventListener("load", function() {
    console.log(reader.result);
}, false);

// The function accepts Blobs and Files
reader.readAsDataURL(x.response);
This logs a data URI, which will contain your base64 data.
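The result is prefixed with the MIME type (something like data:application/pdf;base64,JVBERi0...), so if you need the bare base64 string, split the prefix off:

// Everything after the first comma in a data URI is the base64 payload.
var base64 = reader.result.split(',')[1];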
I think I've found my own solution. The response type needs to be arraybuffer, not blob.
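That is, a minimal sketch adapting the getlink code from the question:

var x = new XMLHttpRequest();
x.open("GET", link, true);
x.responseType = 'arraybuffer'; // not 'blob'
x.onload = function() {
    // x.response is now an ArrayBuffer, which Uint8Array can wrap directly;
    // new Uint8Array(someBlob) silently yields an empty array, which is why
    // version 3 logged only a blank line. (For very large files you would
    // chunk the fromCharCode call to avoid argument-count limits.)
    console.log(btoa(String.fromCharCode.apply(null, new Uint8Array(x.response))));
};
x.send(null);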
We transform HTML to PDF in the backend (PHP) using dompdf. The generated output from dompdf is Base64 encoded with
$output = $dompdf->output();
$encoded = base64_encode($output);
This Base64 encoded content is saved as a file on the server. When we decode this file content like this:
cat /tmp/55acbaa9600f4 | base64 -D > test.pdf
we get a proper PDF file.
But when we transfer the Base64 content to the client as a string value inside a JSON object (the server provides a RESTful API...):
{
"file_data": "...the base64 string..."
}
And decode it with atob() and then create a Blob object to download the file later on, the PDF is always "empty"/broken:
$scope.downloadFileData = function(doc) {
    DocumentService.getFileData(doc).then(function(data) {
        var decodedFileData = atob(data.file_data);
        var file = new Blob([decodedFileData], { type: doc.file_type });
        saveAs(file, doc.title + '.' + doc.extension);
    });
};
When we log the decoded content, it seems that the content is "broken", because several symbols are not the same as when we decode the content on the server using base64 -D.
When we encode/decode the content of simple text/plain documents, it works as expected. But binary (non-ASCII) formats do not.
We have searched the web for many hours, but didn't find a solution for this that works for us. Has anyone had the same problem and can provide us with a working solution? Thanks in advance!
This is an example of the Base64-encoded content of a PDF document, as generated on the server:
JVBERi0xLjMKMSAwIG9iago8PCAvVHlwZSAvQ2F0YWxvZwovT3V0bGluZXMgMiAwIFIKL1BhZ2VzIDMgMCBSID4+CmVuZG9iagoyIDAgb2JqCjw8IC9UeXBlIC9PdXRsaW5lcyAvQ291bnQgMCA+PgplbmRvYmoKMyAwIG9iago8PCAvVHlwZSAvUGFnZXMKL0tpZHMgWzYgMCBSCl0KL0NvdW50IDEKL1Jlc291cmNlcyA8PAovUHJvY1NldCA0IDAgUgovRm9udCA8PCAKL0YxIDggMCBSCj4+Cj4+Ci9NZWRpYUJveCBbMC4wMDAgMC4wMDAgNjEyLjAwMCA3OTIuMDAwXQogPj4KZW5kb2JqCjQgMCBvYmoKWy9QREYgL1RleHQgXQplbmRvYmoKNSAwIG9iago8PAovQ3JlYXRvciAoRE9NUERGKQovQ3JlYXRpb25EYXRlIChEOjIwMTUwNzIwMTMzMzIzKzAyJzAwJykKL01vZERhdGUgKEQ6MjAxNTA3MjAxMzMzMjMrMDInMDAnKQo+PgplbmRvYmoKNiAwIG9iago8PCAvVHlwZSAvUGFnZQovUGFyZW50IDMgMCBSCi9Db250ZW50cyA3IDAgUgo+PgplbmRvYmoKNyAwIG9iago8PCAvRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDY2ID4+CnN0cmVhbQp4nOMy0DMwMFBAJovSuZxCFIxN9AwMzRTMDS31DCxNFUJSFPTdDBWMgKIKIWkKCtEaIanFJZqxCiFeCq4hAO4PD0MKZW5kc3RyZWFtCmVuZG9iago4IDAgb2JqCjw8IC9UeXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMQovTmFtZSAvRjEKL0Jhc2VGb250IC9UaW1lcy1Cb2xkCi9FbmNvZGluZyAvV2luQW5zaUVuY29kaW5nCj4+CmVuZG9iagp4cmVmCjAgOQowMDAwMDAwMDAwIDY1NTM1IGYgCjAwMDAwMDAwMDggMDAwMDAgbiAKMDAwMDAwMDA3MyAwMDAwMCBuIAowMDAwMDAwMTE5IDAwMDAwIG4gCjAwMDAwMDAyNzMgMDAwMDAgbiAKMDAwMDAwMDMwMiAwMDAwMCBuIAowMDAwMDAwNDE2IDAwMDAwIG4gCjAwMDAwMDA0NzkgMDAwMDAgbiAKMDAwMDAwMDYxNiAwMDAwMCBuIAp0cmFpbGVyCjw8Ci9TaXplIDkKL1Jvb3QgMSAwIFIKL0luZm8gNSAwIFIKPj4Kc3RhcnR4cmVmCjcyNQolJUVPRgo=
If you atob() this, you don't get the same result as on the console with base64 -D. Why?
Your issue looks identical to the one I needed to solve recently.
Here is what worked for me:
const binaryImg = atob(base64String);
const length = binaryImg.length;
const arrayBuffer = new ArrayBuffer(length);
const uintArray = new Uint8Array(arrayBuffer);

for (let i = 0; i < length; i++) {
    uintArray[i] = binaryImg.charCodeAt(i);
}

const fileBlob = new Blob([uintArray], { type: 'application/pdf' });
saveAs(fileBlob, 'filename.pdf');
It seems that doing a base64 decode alone is not enough; you need to put the result into a Uint8Array. Otherwise, the PDF pages appear blank. The reason is that the Blob constructor UTF-8-encodes plain strings, so any byte above 0x7F in the decoded binary string gets mangled, whereas a Uint8Array passes the raw bytes through untouched.
I found this solution here:
https://github.com/sayanee/angularjs-pdf/issues/110#issuecomment-579988190
You can use btoa() and atob(); they work in most browsers. For example:
var enc = btoa("this is some text");
alert(enc);
alert(atob(enc));
JSON and base64 are completely independent.
Here's a JSON stringifier/parser (and direct GitHub link).
Here's a base64 Q&A. Here's another one.
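If you need both, they simply compose: stringify first, then encode (note that btoa() throws on characters outside Latin1, so real code may need a UTF-8 encoding step):

// Round trip: JSON -> base64 -> JSON.
var payload = btoa(JSON.stringify({ hello: "world" }));
var roundTrip = JSON.parse(atob(payload));
console.log(roundTrip.hello); // "world"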
I'm trying to decompress zlib'ed XML such as the following:
https://drive.google.com/file/d/0B52P0MZLTdw8ZzQwQzVpZGZVZWc
Uploading to online decompress services works, such as: http://i-tools.org/gzip
In PHP, I'm using this code and it works just fine; I get the XML string:
$raw = file_get_contents("file_here");
$uncompressed = zlib_decode($raw);
However, I want to do this in JavaScript.
The app is a client-side Chrome extension which uses chrome.devtools.network to read from the network logs. It reads binary responses (see the example at the Google Drive link at the top), and the JS needs to decompress each response to its original XML, to be parsed afterwards into an object.
The only problem I have is the zlib decompress part.
As of the latest update, the decompression libraries work but unpacking doesn't. Please skip to the Sept 16 update at the bottom.
I have already tried several JavaScript libraries and still cannot make it work:
Pako: https://github.com/nodeca/pako
unpack() code: https://codereview.stackexchange.com/questions/3569/pack-and-unpack-bytes-to-strings
function unpack(str) {
    var bytes = [];
    for (var i = 0, n = str.length; i < n; i++) {
        var char = str.charCodeAt(i);
        bytes.push(char >>> 8, char & 0xFF);
    }
    return bytes;
}
$.get("file_here", function(response){
var charData = unpack(response);
var binData = new Uint8Array(charData);
var data = pako.inflate(binData);
var strData = String.fromCharCode.apply(null, new Uint16Array(data));
console.log(strData);
});
Error: Uncaught incorrect header check
It's the same even placing the response elsewhere:
new Uint8Array(response);
pako.inflate(response);
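(Side note: unpack() above pushes two bytes per UTF-16 code unit, which by itself is enough to scramble the zlib header. A byte-per-char variant would look like this sketch, though per the updates below the deeper problem is how the data is fetched in the first place:)

// One byte per character: this is how a binary string maps to bytes,
// unlike the two-bytes-per-char unpack() above.
function unpackBytes(str) {
    var bytes = new Uint8Array(str.length);
    for (var i = 0; i < str.length; i++) {
        bytes[i] = str.charCodeAt(i) & 0xFF;
    }
    return bytes;
}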
Imaya's zlib: https://github.com/imaya/zlib.js
$.get("file_here", function(response){
var inflate = new Zlib.Inflate(response);
var output = inflate.decompress();
console.log(output);
});
Error: Uncaught Error: unsupported compression method inflate.js:60
Still using Imaya's zlib, combining with this Stack Overflow question:
Decompress gzip and zlib string in javascript
$.get("file_here", function(response){
var response = response.split('').map(function(e) {
return e.charCodeAt(0);
});
var inflate = new Zlib.Inflate(response);
var output = inflate.decompress();
console.log(output);
});
Error: Uncaught Error: invalid fcheck flag:29 inflate.js:65
dankogai's js-deflate: https://github.com/dankogai/js-deflate
console.log(RawDeflate.inflate(response));
Output: empty
augustl's js-inflate: https://github.com/augustl/js-inflate
console.log(JSInflate.inflate(response));
Output: empty
zlib-browserify: https://github.com/brianloveswords/zlib-browserify
Error: ReferenceError: exports is not defined
This is just a wrapper for Imaya's zlib. I think this uses RequireJS? I'm not even sure how to use it. Can it even be used without installing anything, with just jQuery/JS? The app, as mentioned, is a downloadable Chrome extension with just HTML importing JS files.
UPDATE Sept 16, 2014
It seems the problem is with the JavaScript unpack() function. When I use the ByteArray generated by PHP (http://pastebin.com/uDWvK94B), the JavaScript decompression functions work.
PHP unpacking that works:
$unpacked = unpack("C*", $raw);
For the JavaScript unpack() code that I use, which doesn't work, see the top of the post under the Pako section.
So the new question is: why does JavaScript generate different ByteArray values than the ones generated by PHP?
Is it really a problem with the unpack() function?
Or is it something in how the JS fetches the file, so the encoding or whatever changes and the bytes get messed up?
And lastly, what is your suggested fix?
UPDATE Sept 20, 2014
With more research and some of the answers here giving leads:

- Sebastian S opened the idea that the problem was in the manner of retrieving the data and that it had something to do with text encodings
- user3995789 provided an example showing that it will work even without the unpack() function, though outside the context of Chrome extensions
- Isaac provided examples in the context of Chrome extensions, but it still did not work
With that, I researched further, combining all leads, which led me to the theory that the reason behind all this is that Chrome is unable to get "raw" data through its request.getContent function. See here for the Chrome documentation of said function.
As of now, I have taken the issue to Chrome, see here.
UPDATE March 24, 2015
Although the problem was not fully resolved, the answer which I think was the most useful to me was from Sebastian S, who proposed that the way I was retrieving the data was at fault and that a bad conversion was the cause; this was closest to the actual problem.
jQuery reads the response as UTF-8 text; you have to read the raw file instead. This function will work:
function readTextFile(file) {
    var rawFile = new XMLHttpRequest();
    rawFile.open('GET', file, true);
    rawFile.responseType = 'arraybuffer';
    rawFile.onload = function() {
        var words = new Uint8Array(rawFile.response);
        console.log(words[1]);
        console.log(pako.ungzip(words));
    };
    rawFile.send();
}
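Call it with the same URL the jQuery snippets used, for example:

// The raw bytes arrive via rawFile.response as an ArrayBuffer;
// no unpack() step is needed.
readTextFile('file_here');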
For more information see this answer.
I understood that you want to use zlib decompression inside a Chrome extension while reading response bodies from the network log.
You first need to retrieve the base64 content that will be decompressed. You can achieve this using the getContent method.
function zlibDecompress(base64Content) {
    // var base64Content = base64Content.split(',')[1]; // Not sure if need to keep it
    // Decode base64 (convert ascii to binary)
    var strData = atob(base64Content);
    // Convert binary string to character-number array
    var charData = strData.split('').map(function(x) { return x.charCodeAt(0); });
    // Turn number array into byte-array
    var binData = new Uint8Array(charData);
    // Pako inflate
    var data = pako.inflate(binData, { to: 'string' });
    return data;
}
chrome.devtools.network.onRequestFinished.addListener(
    function(request) {
        request.getContent(
            function(content, encoding) {
                if (encoding == 'base64') {
                    var output = zlibDecompress(content);
                }
            }
        );
    }
);
https://developer.chrome.com/extensions/devtools_network#type-Request
Using XMLHttpRequest:
<script type="text/javascript" src="pako.js"></script>
<script type="text/javascript">
function zlibDecompress(url){
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
xhr.responseType = 'blob';
xhr.onload = function(oEvent) {
// Base64 encode
var reader = new window.FileReader();
reader.readAsDataURL(xhr.response);
reader.onloadend = function() {
base64data = reader.result;
var base64 = base64data.split(',')[1];
// Decode base64 (convert ascii to binary)
var strData = atob(base64);
// Convert binary string to character-number array
var charData = strData.split('').map(function(x){return x.charCodeAt(0);});
// Turn number array into byte-array
var binData = new Uint8Array(charData);
// Pako inflate
var data = pako.inflate(binData, { to: 'string' });
console.log(data);
}
};
xhr.send();
}
zlibDecompress('fileurl');
</script>
If you want to use XMLHttpRequest in a Chrome extension, you need to declare the matching permissions in the manifest:
{
    "name": "My extension",
    ...
    "permissions": [
        "http://www.domain.com/", // The domain that holds the file
        "http://*/" // Or every domain
    ],
    ...
}
https://developer.chrome.com/extensions/xhr
Feel free to ask if you have any questions ;)
In my opinion, the question you should really be asking is: how do you retrieve the compressed data? As soon as it becomes a UTF-16 string, the trouble begins. I'm not even sure if the conversion from raw byte data to JavaScript strings is lossless.
As you wrote something about PHP, I assume you're communicating with some sort of backend. If this is true, there are options to handle binary data with native means. Maybe this can help you: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data
I am generating a string through JavaScript and I need to download it to a text file with a predefined, dynamic filename. This way there will be no room for error by employees.
This is obviously not possible in JavaScript due to security issues. However, from what I have read it should be possible with base64 encoding.
I managed to encode the string and open a URL with the decoded data. The string has been decoded successfully at this URL. The format is as follows:
var data = 'data:text/plain;base64,'+L_EncodedData;
document.location = data;
I need to open a file dialog with the decoded data so the employees can download the content generated at this URL.
Any help?
Many thanks in advance
If you're still looking for an answer to this, check out my answer here. This is how I would adapt it for your needs.
// Convert the Base64 string back to text.
var txt = atob(data.reportBase64Bytes);

// Blob for saving.
var blob = new Blob([txt], { type: "text/plain" });

// Tell the browser to save as report.txt.
saveAs(blob, "report.txt");
If you use this, make sure you grab the polyfills that I mention in the other post.
window.OpenWindowForBase64 = function(url, callback) {
    var image = new Image();
    image.src = url;
    var w = window.open("");
    w.document.write(image.outerHTML);
    if (callback) {
        callback(url);
    }
};
I'm making an export function for an HTML5 game of mine. My current saving method is a crude serialization of game data and then:
// this is JavaScript
var gameData = "abc"; // this is actually a HUGE string of over 2MB

try {
    document.location = "data:text/octet-stream," + encodeURIComponent(JSON.stringify(gameData));
} catch (e) {
    console.log(e);
}
From: Using HTML5/Javascript to generate and save a file
I don't mind the fact that I can't use it for big strings, but I'd like it to generate a warning informing the user that this method doesn't work; unfortunately Chrome (16) crashes without catching that exception.
Is there a better way to implement this kind of export? The important thing for me is that it works locally. The File API would be a better solution, but it doesn't work locally.
AFAIK this is not possible client-side, but a 1.99MB file can be saved this way in Chrome; maybe you should try to compress/optimize your game data a little bit. One way to do that is to use JSZip, as sketched below.
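For instance, a minimal sketch using the current JSZip API (v3, which postdates this answer) together with the saveAs function from the FileSaver.js polyfill mentioned in other answers; treat the exact calls as illustrative:

// Zip the (highly compressible) serialized game data, then save the
// resulting Blob; JSZip v3's generateAsync returns a Promise.
var zip = new JSZip();
zip.file("save.json", JSON.stringify(gameData));
zip.generateAsync({ type: "blob" }).then(function(blob) {
    saveAs(blob, "save.zip"); // saveAs from the FileSaver.js polyfill
});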
In order to check whether the current browser is Google Chrome, so you can warn that the method doesn't work there with long strings, you can use something like this:
if (gameData.length > 1999999 && window.chrome) {
    alert("Sorry, this export method does not work in Google Chrome");
    return;
}
I assume that the JSON.stringify is working, but that the location assignment fails because the URI is too big.
What you can do is convert your URI to a blob and then call
URL.createObjectURL(file)
This creates a URL that points to an internal object (internal to the browser), and it will work as expected.
Here is a method to convert a data URI to a blob:
function dataURIToBlob(dataURI) {
    // convert base64 to raw binary data held in a string
    // doesn't handle URLEncoded DataURIs
    var byteString;
    if (dataURI.split(',')[0].indexOf('base64') >= 0)
        byteString = atob(dataURI.split(',')[1]);
    else
        byteString = unescape(dataURI.split(',')[1]);

    // separate out the mime component
    var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];

    // write the bytes of the string to an ArrayBuffer
    var ab = new ArrayBuffer(byteString.length);
    var ia = new Uint8Array(ab);
    for (var i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
    }

    // write the ArrayBuffer to a blob, and you're done
    return new Blob([ab], { type: mimeString });
}
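Putting it together with the export code from the question (a sketch: the name dataURIToBlob comes from the snippet above, and btoa() would need a UTF-8 encoding step if the game data contains non-Latin1 characters):

// Hypothetical usage: build the data URI as before, convert it to a Blob,
// then navigate to a short object URL instead of the multi-megabyte data URI.
var dataURI = "data:text/octet-stream;base64," + btoa(JSON.stringify(gameData));
var blob = dataURIToBlob(dataURI);
document.location = URL.createObjectURL(blob);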