I'm trying to decompress zlib'ed XML such as the following:
https://drive.google.com/file/d/0B52P0MZLTdw8ZzQwQzVpZGZVZWc
Uploading to online decompress services works, such as: http://i-tools.org/gzip
In PHP, I'm using this code and it works just fine; I get the XML string:
$raw = file_get_contents("file_here");
$uncompressed = zlib_decode($raw);
However, I want to do this in JavaScript.
The app is a client-side Chrome extension which uses chrome.devtools.network to read binary responses from the network log (see the example file at the Google Drive link at the top).
The JS needs to decompress each response to its original XML, which is then parsed into an object.
The only problem I have is the zlib decompress part.
As of the latest update, the decompression libraries work but the unpacking doesn't. Please skip to the Sept 16 update at the bottom.
I have already tried several JavaScript libraries and still cannot make it work:
Pako: https://github.com/nodeca/pako
unpack() code: https://codereview.stackexchange.com/questions/3569/pack-and-unpack-bytes-to-strings
function unpack(str) {
    var bytes = [];
    for (var i = 0, n = str.length; i < n; i++) {
        var char = str.charCodeAt(i);
        bytes.push(char >>> 8, char & 0xFF);
    }
    return bytes;
}
$.get("file_here", function(response){
    var charData = unpack(response);
    var binData = new Uint8Array(charData);
    var data = pako.inflate(binData);
    var strData = String.fromCharCode.apply(null, new Uint16Array(data));
    console.log(strData);
});
Error: Uncaught incorrect header check
The error is the same even when passing the response in directly, without unpack():
new Uint8Array(response);
pako.inflate(response);
Imaya's zlib: https://github.com/imaya/zlib.js
$.get("file_here", function(response){
    var inflate = new Zlib.Inflate(response);
    var output = inflate.decompress();
    console.log(output);
});
Error: Uncaught Error: unsupported compression method inflate.js:60
Still using Imaya's zlib, combining with this Stack Overflow question:
Decompress gzip and zlib string in javascript
$.get("file_here", function(response){
    response = response.split('').map(function(e) {
        return e.charCodeAt(0);
    });
    var inflate = new Zlib.Inflate(response);
    var output = inflate.decompress();
    console.log(output);
});
Error: Uncaught Error: invalid fcheck flag:29 inflate.js:65
dankogai's js-deflate: https://github.com/dankogai/js-deflate
console.log(RawDeflate.inflate(response));
Output: empty
augustl's js-inflate: https://github.com/augustl/js-inflate
console.log(JSInflate.inflate(response));
Output: empty
zlib-browserify: https://github.com/brianloveswords/zlib-browserify
Error: ReferenceError: exports is not defined
This is just a wrapper for Imaya's zlib. I think it expects CommonJS/RequireJS? I'm not even sure how to use it. Can it be used without installing anything, with just jQuery and plain JS? As mentioned, the app is a downloadable Chrome extension consisting of HTML that imports JS files.
UPDATE Sept 16, 2014
It seems the problem is with the JavaScript unpack() function. When I use the ByteArray generated by PHP (http://pastebin.com/uDWvK94B), the JavaScript decompression functions work.
PHP unpacking that works:
$unpacked = unpack("C*", $raw);
For the JavaScript unpack() code that I use, which doesn't work, see the top of the post under the Pako section.
So the new question is: why does JavaScript generate different ByteArray values from the ones generated by PHP?
Is it really a problem with the unpack() function?
Or does the encoding change somewhere when the JS fetches the file, so that the bytes get messed up?
And lastly, what is your suggested fix?
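To illustrate the encoding suspicion in the second question, here is a minimal sketch (not code from the app, just a demonstration, with TextDecoder standing in for whatever jQuery does internally):

// 0x78 0x9C is the usual zlib header, so any real response contains bytes >= 0x80
var bytes = new Uint8Array([0x78, 0x9C]);
var asText = new TextDecoder('utf-8').decode(bytes);
console.log(asText.charCodeAt(0)); // 120: 0x78 is plain ASCII and survives
console.log(asText.charCodeAt(1)); // 65533: 0x9C is invalid UTF-8, replaced by U+FFFD

If that is what happens, the original byte values are already gone by the time any unpack() variant runs, so the fix would have to be on the retrieval side.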
UPDATE Sept 20, 2014
With more research and some of the answers here giving leads:
Sebastian S opened up the idea that the problem was in the manner of retrieving the data, and that it had something to do with text encodings.
user3995789 provided an example showing that it will work even without the unpack() function, though outside the context of Chrome extensions.
Isaac provided examples in the context of Chrome extensions, but it still does not work.
With that I researched further, and combining all the leads led me to a theory that the reason behind all this is that Chrome is unable to return "raw" data through its request.getContent function. See here for the Chrome documentation for the said function.
As of now, I have taken the issue to Chrome, see here.
UPDATE March 24, 2015
Although the problem was not fully resolved, the answer that I found most useful was from @Sebastian S, who proposed that the way I was receiving the data was at fault and that a bad conversion was the cause, which was closest to the actual problem.
jQuery reads responses as UTF-8 text; you have to read the raw file instead. This function will work:
function readTextFile(file)
{
    var rawFile = new XMLHttpRequest();
    rawFile.open('GET', file, true);
    rawFile.responseType = 'arraybuffer';
    rawFile.onload = function (response)
    {
        var words = new Uint8Array(rawFile.response);
        console.log(words[1]);
        console.log(pako.ungzip(words));
    };
    rawFile.send();
}
For more information see this answer.
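Note that pako.ungzip expects a gzip wrapper; for zlib-wrapped data like the file in the question, pako.inflate should be the right call. A sketch of the same handler with that change:

rawFile.onload = function (response)
{
    var words = new Uint8Array(rawFile.response);
    // pako.inflate understands the zlib wrapper; pako.ungzip is only for gzip
    console.log(pako.inflate(words, { to: 'string' }));
};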
I understand that you want to use zlib decompression inside a Chrome extension while reading response bodies from the network log.
You first need to retrieve the base64 content that will be decompressed. You can do this using the getContent method.
function zlibDecompress(base64Content){
    // var base64Content = base64Content.split(',')[1]; // Not sure if you need to keep it
    // Decode base64 (convert ascii to binary)
    var strData = atob(base64Content);
    // Convert binary string to character-number array
    var charData = strData.split('').map(function(x){return x.charCodeAt(0);});
    // Turn number array into byte-array
    var binData = new Uint8Array(charData);
    // Pako inflate
    var data = pako.inflate(binData, { to: 'string' });
    return data;
}
chrome.devtools.network.onRequestFinished.addListener(
    function(request) {
        request.getContent(
            function(content, encoding){
                if(encoding == 'base64'){
                    var output = zlibDecompress(content);
                }
            }
        );
    }
);
https://developer.chrome.com/extensions/devtools_network#type-Request
Using XMLHttpRequest:
<script type="text/javascript" src="pako.js"></script>
<script type="text/javascript">
function zlibDecompress(url){
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.responseType = 'blob';
    xhr.onload = function(oEvent) {
        // Base64 encode
        var reader = new window.FileReader();
        reader.readAsDataURL(xhr.response);
        reader.onloadend = function() {
            var base64data = reader.result;
            var base64 = base64data.split(',')[1];
            // Decode base64 (convert ascii to binary)
            var strData = atob(base64);
            // Convert binary string to character-number array
            var charData = strData.split('').map(function(x){return x.charCodeAt(0);});
            // Turn number array into byte-array
            var binData = new Uint8Array(charData);
            // Pako inflate
            var data = pako.inflate(binData, { to: 'string' });
            console.log(data);
        };
    };
    xhr.send();
}
zlibDecompress('fileurl');
</script>
If you want to use XMLHttpRequest in a Chrome extension, you need the matching permissions in the manifest:
{
    "name": "My extension",
    ...
    "permissions": [
        "http://www.domain.com/", // The domain that holds the file
        "http://*/" // Or every domain
    ],
    ...
}
https://developer.chrome.com/extensions/xhr
Feel free to ask if you have any questions ;)
In my opinion, the question you should really be asking is: how do you retrieve the compressed data? As soon as it becomes a UTF-16 string, the trouble begins. I'm not even sure the conversion from raw byte data to JavaScript strings is lossless.
As you wrote something about PHP, I assume you're communicating with some sort of backend. If this is true, there are options for handling binary data with native means. Maybe this can help you: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data
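For example, a minimal sketch of what that MDN article describes (the URL is a placeholder):

var xhr = new XMLHttpRequest();
xhr.open('GET', '/file_here', true); // placeholder URL
xhr.responseType = 'arraybuffer';    // the bytes arrive untouched; no string conversion happens
xhr.onload = function () {
    var bytes = new Uint8Array(xhr.response);
    // hand 'bytes' to the decompression library of your choice
};
xhr.send();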
Related
I am trying to write a JXA script in Apple Script Editor that converts PNG files to base64 strings, which can then be added to a JSON object.
I cannot seem to find a JXA method that works for doing the base64 encoding/decoding part.
I came across a droplet which was written using Shell Script that outsources the task to openssl and then outputs a .b64 file:
for f in "$@"
do
    openssl base64 -in "$f" -out "$f.b64"
done
So I was thinking of Frankenstein'ing this up to a method that uses evalAS to run inline AppleScript, per the example:
(() => {
    'use strict';

    // evalAS2 :: String -> IO a
    const evalAS2 = s => {
        const a = Application.currentApplication();
        return (a.includeStandardAdditions = true, a)
            .runScript(s);
    };

    return evalAS2(
        'use scripting additions\n\
        for f in \x22' + file + '\x22\n\
        do\n\
        openssl base64 -in "$f" -out "$f.b64"\n\
        done'
    );
})();
And then re-opening the .b64 file in the script, but this all seems rather long-winded and clunky.
I know that it is possible to use Cocoa in JXA scripts, and I see that there are methods for base64 encoding/decoding in Cocoa...
As well as Objective-C:
NSData *imageData = UIImagePNGRepresentation(myImageView.image);
NSString * base64String = [imageData base64EncodedStringWithOptions:0];
The JXA Cookbook has a whole section going over Syntax for Calling ObjC functions, which I am trying to read over.
From what I understand, it should look something like:
var image_to_convert = $.NSData.alloc.UIImagePNGRepresentation(image)
var image_as_base64 = $.NSString.alloc.base64EncodedStringWithOptions(image_to_convert)
But I just am a total noob to this, so it is still difficult for me to understand it all.
In the speculative code above, I am not sure where I would get the image data from?
I am currently trying:
ObjC.import("Cocoa");
var image = $.NSImage.alloc.initWithContentsOfFile(file)
console.log(image);
var image_to_convert = $.NSData.alloc.UIImagePNGRepresentation(image)
var image_as_base64 = $.NSString.alloc.base64EncodedStringWithOptions(image_to_convert)
But it is resulting in the following errors:
$.NSData.alloc.UIImagePNGRepresentation is not a function. (In
'$.NSData.alloc.UIImagePNGRepresentation(image)',
'$.NSData.alloc.UIImagePNGRepresentation' is undefined)
I am guessing it is because UIImagePNGRepresentation is of the UIKit framework, which is an iOS thing and not OS X?
I came across this post, which suggests this:
NSArray *keys = [NSArray arrayWithObject:@"NSImageCompressionFactor"];
NSArray *objects = [NSArray arrayWithObject:@"1.0"];
NSDictionary *dictionary = [NSDictionary dictionaryWithObjects:objects forKeys:keys];
NSImage *image = [[NSImage alloc] initWithContentsOfFile:[imageField stringValue]];
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:[image TIFFRepresentation]];
NSData *tiff_data = [imageRep representationUsingType:NSPNGFileType properties:dictionary];
NSString *base64 = [tiff_data encodeBase64WithNewlines:NO];
But again, I have no idea how this translates to JXA. I just am determined to get something working.
I was hoping that there was some way of just doing it in plain old JavaScript that will work in a JXA script?
I look forward to any answers and/or pointers that you might be able to provide. Thank you all in advance!
I'm sorry, I have never worked with JXA, but I have worked a lot in Objective-C.
I think you are getting the errors because you are always trying to allocate new objects.
I think it should simply be:
ObjC.import("Cocoa");
var imageData = $.NSData.alloc.initWithContentsOfFile(file);
console.log(imageData);
var image_as_base64 = imageData.base64EncodedStringWithOptions(0); // Call method of allocated object
0 is a constant for the Base64 encoding options; passing it just gets you the plain base64 string.
edit:
To make the value visible to JXA, unwrap it:
var theString = ObjC.unwrap(image_as_base64);
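Putting the pieces together, a minimal end-to-end sketch might look like this (the file path is a placeholder, and this is untested since, as said, I have not worked with JXA):

ObjC.import("Cocoa");
var file = "/path/to/image.png"; // placeholder path
var imageData = $.NSData.alloc.initWithContentsOfFile(file);
var image_as_base64 = imageData.base64EncodedStringWithOptions(0);
var theString = ObjC.unwrap(image_as_base64); // plain JS string, ready to put in a JSON object
console.log(theString);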
Use the code below. Read the file into var file from the jQuery file input element, using FileReader's readAsDataURL. Then you will have your PNG as a string in base64 format.
You may need to split the base64 string on ',' to get the actual data part of the string, which you can include in a JSON and send to the backend via an API.
var file = $('#fileUpload').prop('files')[0];
var base64data;
var reader = new FileReader();
reader.readAsDataURL(file);
reader.onload = function() {
    base64data = reader.result;
    var dataUrl = base64data.split(",");
};
Usually the base64 string you get is in this form:
'data:image/png;base64,STREAM_OF_SOME_CHARACTERS...
So the STREAM_OF_SOME_CHARACTERS... part (dataUrl) is where the actual image data is.
Furthermore, you can display the image in an HTML page by using the whole data URL as the source:
<img src="data:image/png;base64,STREAM_OF_SOME_CHARACTERS...">
I have three failing versions of the following code in a chrome extension, which attempts to intercept a click to a link pointing to a pdf file, fetch that file, convert it to base64, and then log it. But I'm afraid I don't really know anything about binary formats and encodings, so I'm royally sucking this up.
var links = document.getElementsByTagName("a");
function transform(blob) {
    return btoa(String.fromCharCode.apply(null, new Uint8Array(blob)));
}

function getlink(link) {
    var x = new XMLHttpRequest();
    x.open("GET", link, true);
    x.responseType = 'blob';
    x.onload = function(e) {
        console.log("Raw response:");
        console.log(x.response);
        console.log("Direct transformation:");
        console.log(btoa(x.response));
        console.log("Mysterious thing I got from SO:");
        console.log(transform(x.response));
        window.location.href = link;
    };
    x.onerror = function(e) {
        console.error(x.statusText);
    };
    x.send(null);
}

for (var i = 0, len = links.length; i < len; i++) {
    var l = links[i];
    l.addEventListener("click", function(e) {
        e.preventDefault();
        e.stopPropagation();
        e.stopImmediatePropagation();
        getlink(this.href);
    }, false);
}
Version 1 doesn't have the call to x.responseType, or the call to transform. It was my original, naive, implementation. It threw an error: "The string to be encoded contains characters outside of the Latin1 range."
After googling that error, I found this prior SO, which suggests that in parsing an image:
The response type needs to be set to blob. So this code does that.
There's some weird line, I don't know what it does at all: String.fromCharCode.apply(null, new Uint8Array(blob)).
Because I know nothing about binary formats, I guessed, probably stupidly, that making a PDF base64 would be the same as making some random image format base64. So, in fine SO tradition, I copied code that I don't really understand. In stages.
Version 2 of the code just set the response type to blob but didn't try the second transformation. And the code worked, and logged something that looked like a base64 string, but a clearly incorrect string. In its entirety, it logged:
W29iamVjdCBCbG9iXQ==
Which is just goofily wrong. It's obviously too short for a 46k pdf file, and a reference base64 encoding I created with python from the commandline was much much much longer, as one would expect.
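Decoding it shows exactly what happened: btoa() coerced the Blob to its string representation before encoding, so this version never saw the file's bytes at all:

atob('W29iamVjdCBCbG9iXQ==') // "[object Blob]"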
Version 3 of the code then also applies the mysterious transformation using String.fromCharCode and all the rest, which I shoved into the transform function.
However, that doesn't log anything at all; a blank line appears in the console in its appropriate place. No errors, no nonsense output, just a blank line.
I know I'm getting the correct file from prior testing. Also, the call to log the raw response object produces Blob {size: 45587, type: "application/pdf"}, which is the correct filesize for the pdf I'm experimenting with, so the blob actually contains what it should when it gets into the browser.
I'm using, and only need to support, a current version of chrome.
Can someone tell me what I'm doing wrong?
Thanks!
If you only need to support modern browsers, you should also be able to use FileReader#readAsDataURL.
That would let you do something like this:
var reader = new FileReader();
reader.addEventListener("load", function () {
    console.log(reader.result);
}, false);

// The function accepts Blobs and Files
reader.readAsDataURL(x.response);
This logs a data URI, which will contain your base64 data.
I think I've found my own solution. The response type needs to be arraybuffer not blob.
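For the record, a sketch of the working combination, reusing transform() and link from the question:

var x = new XMLHttpRequest();
x.open("GET", link, true);
x.responseType = 'arraybuffer'; // not 'blob'
x.onload = function(e) {
    // x.response is now an ArrayBuffer, so new Uint8Array(x.response)
    // inside transform() sees the real bytes
    console.log(transform(x.response));
};
x.send(null);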
I'm trying to use a combination of Ajax and data URIs to load a JPEG image and extract its EXIF data with a single HTTP request. I am modifying a library (https://github.com/kennydude/photosphere) to do this; currently this library uses two HTTP requests to set the source of the image and to get the EXIF data.
Getting the EXIF works, no problem. However I am having difficulty using the raw data from the ajax request as source for the image.
Source code for a small test of the technique:
<!DOCTYPE html>
<html>
<head>
<script type='text/javascript'>
function init()
{
    // own ajax library - using it to request a test jpg image
    new Ajax().sendRequest(
        "/images/photos/badger.jpg",
        {
            method: "GET",
            callback: function(xmlHTTP)
            {
                var encoded = btoa(unescape(encodeURIComponent(xmlHTTP.responseText)));
                var dataURL = "data:image/jpeg;base64," + encoded;
                document.getElementById("image").src = dataURL;
            }
        }
    );
}
</script>
<script type="text/javascript" src="http://www.free-map.org.uk/0.6/js/lib/Ajax.js"></script>
</head>
<body onload='init()'>
<img id="image" alt="data url loaded image" />
</body>
</html>
I get what looks like sensible JPEG data sent back, and the length (in bytes) of the raw data and of the base64-encoded-then-decoded-again data is the same. However, the attempt to set the image src fails on both Firefox (25) and Chrome (31) (current versions); Chrome displays the "broken image" icon, suggesting the src is in an invalid format.
I used this mozilla page for info on base64 encoding/decoding:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Base64_encoding_and_decoding
Any idea what might be wrong? Looking around, I could create the base64-encoded image server side, but can it be done client side like this? For one thing, base64 encoding server side obviously increases the data size, and the whole purpose of this exercise is to cut down the amount of data being transferred from the server, as well as the number of requests.
Thanks,
Nick
Thanks for that. I've done a bit more digging on this and it turns out there is a solution at least on current versions of Firefox and Chrome (EDIT: IE10 works too). You can use XMLHttpRequest2 and use a typed array (Uint8Array). The following code works:
<!DOCTYPE html>
<html>
<head>
<script type='text/javascript'>
function init()
{
    var xmlHTTP = new XMLHttpRequest();
    xmlHTTP.open('GET', '/images/photos/badger.jpg', true);
    // Must include this line - specifies the response type we want
    xmlHTTP.responseType = 'arraybuffer';
    xmlHTTP.onload = function(e)
    {
        var arr = new Uint8Array(this.response);
        // Convert the int array to a binary string
        // We have to use apply() as we are converting an *array*
        // and String.fromCharCode() takes one or more single values, not
        // an array.
        var raw = String.fromCharCode.apply(null, arr);
        // This works!!!
        var b64 = btoa(raw);
        var dataURL = "data:image/jpeg;base64," + b64;
        document.getElementById("image").src = dataURL;
    };
    xmlHTTP.send();
}
</script>
</head>
<body onload='init()'>
<img id="image" alt="data url loaded image" />
</body>
</html>
Basically you ask for a binary response, then create an 8-bit unsigned int view of the data before converting it back into a (binary-friendly) string with String.fromCharCode(). The apply() is necessary because String.fromCharCode() does not accept an array argument. You then use btoa(), create your data URL, and it works.
The following resources were useful for this:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays?redirectlocale=en-US&redirectslug=JavaScript%2FTyped_arrays
and
http://www.html5rocks.com/en/tutorials/file/xhr2/
Nick
Nick's answer works very well. But when I did this with a fairly large file, I got a stack overflow on
var raw = String.fromCharCode.apply(null,arr);
Generating the raw string in chunks worked well for me.
var raw = '';
var i, j, subArray, chunk = 5000;
for (i = 0, j = arr.length; i < j; i += chunk) {
    subArray = arr.subarray(i, i + chunk);
    raw += String.fromCharCode.apply(null, subArray);
}
I had trouble with the ArrayBuffer -> String -> Base64 method described above, but ran across another method using Blob that worked great. It's not a way to convert raw data to Base 64 (as in the title), but it is a way to display raw image data (as in the actual question):
var xhr = new XMLHttpRequest();
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
    var blb = new Blob([xhr.response], {type: 'image/png'});
    var url = (window.URL || window.webkitURL).createObjectURL(blb);
    image.src = url;
};
xhr.open('GET', 'http://whatever.com/wherever');
xhr.send();
All credit goes to Jan Miksovsky, author of this fiddle. I just stumbled across it and thought it'd make a useful addition to this discussion.
A modern, ES6-powered solution for image downloading (without specifying the image type):
async function downloadImageFromUrl(url) { // returns dataURL
    const xmlHTTP = new XMLHttpRequest();
    xmlHTTP.open('GET', url, true);
    xmlHTTP.responseType = 'blob';
    const imageBlob = await new Promise((resolve, reject) => {
        xmlHTTP.onload = e =>
            xmlHTTP.status >= 200 && xmlHTTP.status < 300 && xmlHTTP.response.type.startsWith('image/')
                ? resolve(xmlHTTP.response)
                : reject(Error(`wrong status or type: ${xmlHTTP.status}/${xmlHTTP.response.type}`));
        xmlHTTP.onerror = reject;
        xmlHTTP.send();
    });
    return blobToDataUrl(imageBlob);
}

function blobToDataUrl(blob) {
    return new Promise(resolve => {
        const reader = new FileReader(); // https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications
        reader.onload = e => resolve(e.target.result);
        reader.readAsDataURL(blob);
    });
}
Usage:
downloadImageFromUrl('https://a.b/img.png').then(console.log, console.error)
I've been working on this issue for two days, since I needed a solution to render the user's Outlook profile picture from the raw data received from Microsoft Graph. I implemented all the solutions above, with no success. Then I found this gist:
get base64 raw data of image from responseBody using jquery ajax
In my case, I just replaced "data:image/png;base64," with "data:image/jpg;base64,"
It works like a charm.
You will have to do base64 encoding on the server side as the responseText is treated as a String, and the response data that the server is sending is binary.
I'm trying to convert a blob (created with zip.js) to a base64 and persist it in the websql database. Then I would also like to do this process the other way around. Anyway, my test code (without the compression) looks something like:
var blob = new Blob([data], {
    type: "text/plain"
});
blobToBase64(blob, function(b64) {   // convert BLOB to BASE64
    var newBlob = base64ToBlob(b64); // convert BASE64 to BLOB
    console.log(blob.size + " != " + newBlob.size);
});
see a working example: http://jsfiddle.net/jeanluca/4bn5G/
So, the strange thing is that it works in Chrome, but not in Safari (also not on my iPad).
I also tried to rewrite the base64ToBlob to
function base64ToBlob(base64) {
    var binary = atob(base64);
    return new Blob([binary]);
}
But then the decompression doesn't work anymore, giving me an "IndexSizeError: DOM Exception 1" exception.
Any suggestions on what might be wrong in my code?
Thnx
Well I found a solution just after posting my comment.
Instead of
new Blob([data]);
do
new Blob([data.buffer]);
notice the addition of ".buffer"
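With that fix, a cross-browser base64ToBlob can be sketched like this (the same idea as the code in the question, just going through a typed array first):

function base64ToBlob(base64) {
    var binary = atob(base64);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i); // each char code is a byte value (0-255)
    }
    return new Blob([bytes.buffer]); // note ".buffer"
}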
I am receiving data as a "ZLIB" compressed input stream.
Using JavaScript/Ajax/jQuery, I need to uncompress it on the client side.
Is there a way to do so?
I already have this working in Java as below, but I need to do it on the client side.
url = new URL(getCodeBase(), dataSrcfile);
URLConnection urlConn = url.openConnection();
urlConn.setUseCaches(false);
InputStream in = urlConn.getInputStream();
InflaterInputStream inflate = new InflaterInputStream(in);
InputStreamReader inputStreamReader = new InputStreamReader(inflate);
BufferedReader bufReader = new BufferedReader(inputStreamReader);
// Read until no more '#'
int i = 0;
int nHidden = 0;
String line1;
do // ------------------------ parsing starts here
{
    line1 = bufReader.readLine();
    .............
    ...... so on
Pako is a full and modern Zlib port.
Here is a very simple example and you can work from there.
Get pako.js and you can decompress byteArray like so:
<html>
<head>
<title>Gunzipping binary gzipped string</title>
<script type="text/javascript" src="pako.js"></script>
<script type="text/javascript">
// Get datastream as Array, for example:
var charData = [31,139,8,0,0,0,0,0,0,3,5,193,219,13,0,16,16,4,192,86,214,151,102,52,33,110,35,66,108,226,60,218,55,147,164,238,24,173,19,143,241,18,85,27,58,203,57,46,29,25,198,34,163,193,247,106,179,134,15,50,167,173,148,48,0,0,0];
// Turn number array into byte-array
var binData = new Uint8Array(charData);
// Pako magic
var data = pako.inflate(binData);
// Convert gunzipped byteArray back to ascii string:
var strData = String.fromCharCode.apply(null, new Uint16Array(data));
// Output to console
console.log(strData);
</script>
</head>
<body>
Open up the developer console.
</body>
</html>
Running example: http://jsfiddle.net/9yH7M/
Alternatively, you can base64 encode the array before you send it over, as a plain number array takes up a lot of overhead when sent as JSON or XML. Decode likewise:
// Get some base64 encoded binary data from the server. Imagine we got this:
var b64Data = 'H4sIAAAAAAAAAwXB2w0AEBAEwFbWl2Y0IW4jQmziPNo3k6TuGK0Tj/ESVRs6yzkuHRnGIqPB92qzhg8yp62UMAAAAA==';
// Decode base64 (convert ascii to binary)
var strData = atob(b64Data);
// Convert binary string to character-number array
var charData = strData.split('').map(function(x){return x.charCodeAt(0);});
// Turn number array into byte-array
var binData = new Uint8Array(charData);
// Pako magic
var data = pako.inflate(binData);
// Convert gunzipped byteArray back to ascii string:
var strData = String.fromCharCode.apply(null, new Uint16Array(data));
// Output to console
console.log(strData);
Running example: http://jsfiddle.net/9yH7M/1/
To go more advanced, here is the pako API documentation.
A more recent offering is https://github.com/imaya/zlib.js
I think it's much better than the alternatives.
Our library JSXGraph contains the deflate, unzip and gunzip algorithms. Please have a look at jsxcompressor (a spin-off from JSXGraph, see http://jsxgraph.uni-bayreuth.de/wp/download/) or at Utils.js in the source code.
Try pako (https://github.com/nodeca/pako); it's not just inflate/deflate, but an exact zlib port to JavaScript, with almost all features and options supported. It's also the fastest implementation in modern browsers.
Just as the first comments on your question suggest, I would suspect that you actually want the browser to handle the decompression. If I am mistaken, you might want to check out the JSXGraph library; it is supposed to contain pure JS implementations of deflate and unzip.
The js-deflate project by dankogai may be what you are looking for. I haven't actually tried it, but the rawinflate.js code seems fairly minimal, and it should be able to decompress DEFLATE/zlib-compressed data.
Using pako you can decode the compressed (gzip) response; this is short code for the answer above:
JSON.parse(Pako.inflate(Buffer.from(data, 'base64'), { to: 'string' }))
Buffer.from, Pako
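Note that Buffer is a Node.js (or bundler-provided) global; in a plain browser environment, an equivalent sketch without Buffer would be:

var bytes = Uint8Array.from(atob(data), function (c) { return c.charCodeAt(0); });
var obj = JSON.parse(pako.inflate(bytes, { to: 'string' }));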
Browserify-zlib works perfectly for me; it uses pako and has the exact same API as zlib. After struggling for days with compressing/decompressing zlib-encoded payloads on the client side with pako, I can say that browserify-zlib is really convenient.
You should see the zlib RFC.
The JavaScript inflate code I tested: inflate in Javascript
The Java code I wrote:
static public byte[] compress(byte[] input) {
    Deflater deflater = new Deflater();
    deflater.setInput(input, 0, input.length);
    deflater.finish();
    byte[] buff = new byte[input.length + 50];
    deflater.deflate(buff);
    int compressedSize = deflater.getTotalOut();
    if (deflater.getTotalIn() != input.length)
        return null;
    byte[] output = new byte[compressedSize - 6];
    System.arraycopy(buff, 2, output, 0, compressedSize - 6); // delete head and foot bytes
    return output;
}
The very important thing is that with deflate in Java you must cut the 2-byte head and the 4-byte foot to get the raw deflate stream.
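(In Java, constructing the Deflater with the nowrap flag, new Deflater(level, true), produces raw deflate directly, without the manual byte surgery.) On the JavaScript side no cutting is needed either if the library supports both wrappings; pako, for instance, does. A sketch, assuming bytes is a Uint8Array:

// bytes holds a zlib stream (2-byte head and 4-byte Adler-32 foot still present):
var text = pako.inflate(bytes, { to: 'string' });
// bytes holds raw deflate (head and foot already cut, as in the Java code above):
var textRaw = pako.inflateRaw(bytes, { to: 'string' });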