I want to display OpenOffice files, .odt and .odp, on the client side using a web browser.
These files are zipped. Using Ajax I can fetch them from the server, but they arrive still zipped, so I have to unzip them with JavaScript. I tried inflate.js (http://www.onicos.com/staff/iz/amuse/javascript/expert/inflate.txt), but without success.
How can I do this?
I wrote an unzipper in Javascript. It works.
It relies on Andy G.P. Na's binary file reader and some RFC1951 inflate logic from notmasteryet. I added the ZipFile class.
working example:
http://cheeso.members.winisp.net/Unzip-Example.htm (dead link)
The source:
http://cheeso.members.winisp.net/srcview.aspx?dir=js-unzip (dead link)
NB: the links are dead; I'll find a new host soon.
Included in the source is a ZipFile.htm demonstration page, and 3 distinct scripts, one for the zipfile class, one for the inflate class, and one for a binary file reader class. The demo also depends on jQuery and jQuery UI. If you just download the js-zip.zip file, all of the necessary source is there.
Here's what the application code looks like in Javascript:
// In my demo, this gets attached to a click event.
// it instantiates a ZipFile, and provides a callback that is
// invoked when the zip is read. This can take a few seconds on a
// large zip file, so it's asynchronous.
var readFile = function() {
    $("#status").html("<br/>");
    var url = $("#urlToLoad").val();
    var doneReading = function(zip) {
        extractEntries(zip);
    };
    var zipFile = new ZipFile(url, doneReading);
};
// this function extracts the entries from an instantiated zip
function extractEntries(zip) {
    $('#report').accordion('destroy');
    // clear
    $("#report").html('');
    var extractCb = function(id) {
        // this callback is invoked with the entry name, and entry text
        // in my demo, the text is just injected into an accordion panel.
        return (function(entryName, entryText) {
            var content = entryText.replace(new RegExp("\\n", "g"), "<br/>");
            $("#" + id).html(content);
            $("#status").append("extract cb, entry(" + entryName + ") id(" + id + ")<br/>");
            $('#report').accordion('destroy');
            $('#report').accordion({collapsible: true, active: false});
        });
    };
    // for each entry in the zip, extract it.
    for (var i = 0; i < zip.entries.length; i++) {
        var entry = zip.entries[i];
        var entryInfo = "<h4><a>" + entry.name + "</a></h4>\n<div>";
        // contrive an id for the entry, make it unique
        var randomId = "id-" + Math.floor((Math.random() * 1000000000));
        entryInfo += "<span class='inputDiv'><h4>Content:</h4><span id='" + randomId +
            "'></span></span></div>\n";
        // insert the info for one entry as the last child within the report div
        $("#report").append(entryInfo);
        // extract asynchronously
        entry.extract(extractCb(randomId));
    }
}
The demo works in a couple of steps: the readFile fn is triggered by a click and instantiates a ZipFile object, which reads the zip file. There's an asynchronous callback for when the read completes (usually in less than a second for reasonably sized zips). In this demo the callback is held in the doneReading local variable, which simply calls extractEntries, which blindly unzips all of the content of the provided zip file. In a real app you would probably choose some of the entries to extract (allow the user to select, or choose one or more entries programmatically, etc.).
The extractEntries fn iterates over all entries and calls extract() on each one, passing a callback. Decompression of an entry takes time, maybe 1s or more per entry, which means asynchrony is appropriate. The extract callback simply adds the extracted content to a jQuery accordion on the page. If the content is binary, then it gets formatted as such (not shown).
It works, but I think that the utility is somewhat limited.
For one thing: it's very slow. It takes ~4 seconds to unzip the 140k AppNote.txt file from PKWare; the same decompression can be done in less than 0.5s in a .NET program. EDIT: The Javascript ZipFile unpacks considerably faster than this now, in IE9 and in Chrome. It is still slower than a compiled program, but it is plenty fast for normal browser usage.
For another: it does not do streaming. It basically slurps the entire contents of the zipfile into memory. In a "real" programming environment you could read in only the metadata of a zip file (say, 64 bytes per entry) and then read and decompress the other data as desired. There's no way to do IO like that in javascript, as far as I know; therefore the only option is to read the entire zip into memory and do random access in it. This means it will place unreasonable demands on system memory for large zip files. Not so much a problem for smaller zip files.
Also: it doesn't handle the "general case" zip file - there are lots of zip options that I didn't bother to implement in the unzipper - like ZIP encryption, WinZip encryption, zip64, UTF-8 encoded filenames, and so on. (EDIT - it handles UTF-8 encoded filenames now.) The ZipFile class handles the basics, though. Some of these things would not be hard to implement. I have an AES encryption class in Javascript; that could be integrated to support encryption. Supporting Zip64 would probably be useless for most users of Javascript, as it is intended to support >4GB zipfiles - no need to extract those in a browser.
I also did not test the case for unzipping binary content. Right now it unzips text. If you have a zipped binary file, you'd need to edit the ZipFile class to handle it properly; I didn't figure out how to do that cleanly. (EDIT - it does binary files now, too.)
EDIT - I updated the JS unzip library and demo. It now does binary files, in addition to text. I've made it more resilient and more general - you can now specify the encoding to use when reading text files. Also the demo is expanded - it shows unzipping an XLSX file in the browser, among other things.
So, while I think it is of limited utility and interest, it works. I guess it would work in Node.js.
I'm using zip.js and it seems to be quite useful. It's worth a look!
Check the Unzip demo, for example.
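For a quick feel of the API, reading entries looks roughly like this (a sketch against the current @zip.js/zip.js ZipReader API, inside an async function; blob stands for whatever zipped Blob you obtained):

import { BlobReader, TextWriter, ZipReader } from '@zip.js/zip.js';

const reader = new ZipReader(new BlobReader(blob));
const entries = await reader.getEntries();                // list the zip entries
const text = await entries[0].getData(new TextWriter());  // extract the first one as text
await reader.close();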
I found jszip quite useful. I've used it so far only for reading, but it has create/edit capabilities as well.
Code-wise it looks something like this:
var new_zip = new JSZip();
new_zip.load(file);
new_zip.files["doc.xml"].asText() // this gives you the text in the file
One thing I noticed is that the file has to be in binary stream format (read using FileReader's .readAsArrayBuffer); otherwise I was getting errors saying I might have a corrupt zip file.
Edit: Note from the 2.x to 3.0.0 upgrade guide:
The load() method and the constructor with data (new JSZip(data)) have been replaced by loadAsync().
Thanks user2677034
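For JSZip 3.x, the snippet above would therefore look roughly like this (a sketch based on the documented loadAsync()/async() API; "doc.xml" is just the example entry name from above):

JSZip.loadAsync(file)  // accepts a Blob, ArrayBuffer, Uint8Array, ...
    .then(function (zip) {
        return zip.file("doc.xml").async("string"); // read one entry as text
    })
    .then(function (text) {
        console.log(text);
    });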
If you need to support other formats as well, or just need good performance, you can use this WebAssembly library.
It's promise-based, it uses Web Workers for threading, and the API is a simple ES module.
How to use
Install with npm i libarchive.js and use it as an ES module.
The library consists of two parts: the ES module and the webworker bundle. The ES module part is your interface to the library; use it like any other module. The webworker bundle lives in the libarchive.js/dist folder. You need to make sure it is available in your public folder, since it will not get processed by your bundler (it's all bundled up already), and specify the correct path in the Archive.init() method.
import {Archive} from 'libarchive.js/main.js';

Archive.init({
    workerUrl: 'libarchive.js/dist/worker-bundle.js'
});

document.getElementById('file').addEventListener('change', async (e) => {
    const file = e.currentTarget.files[0];
    const archive = await Archive.open(file);
    let obj = await archive.extractFiles();
    console.log(obj);
});
// outputs
{
    ".gitignore": {File},
    "addon": {
        "addon.py": {File},
        "addon.xml": {File}
    },
    "README.md": {File}
}
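Since the extracted entries are regular File objects (as the output above suggests), reading one afterwards is plain browser API. A minimal sketch, assuming the archive contains the README.md entry shown above:

const readme = obj["README.md"];
const text = await readme.text(); // File inherits from Blob, so .text() works in modern browsers
console.log(text);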
I wrote "Binary Tools for JavaScript", an open source project that includes the ability to unzip, unrar and untar: https://github.com/codedread/bitjs
Used in my comic book reader: https://github.com/codedread/kthoom (also open source).
HTH!
If anyone's reading images or other binary files from a zip file hosted on a remote server, you can use the following snippet to download it and create a zip object using the jszip library.
// getStorageUrl just gets the public url of the zip file.
let url = await getStorageUrl(path)
console.log('public url is', url)
// get the zip file to the client
axios.get(url, { responseType: 'arraybuffer' }).then((res) => {
    console.log('zip download status ', res.status)
    // load contents into jszip and create an object
    jszip.loadAsync(new Blob([res.data], { type: 'application/zip' })).then((zip) => {
        const zipObj = zip
        $.each(zip.files, function (index, zipEntry) {
            console.log('filename', zipEntry.name)
        })
    })
})
Now using zipObj you can access the files and create a src url for them.

var fname = 'myImage.jpg'
zipObj.file(fname).async('blob').then((blob) => {
    var blobUrl = URL.createObjectURL(blob)
    // use blobUrl as the src of an <img> element
})
Support for gzip in JavaScript is surprisingly weak. All browsers implement it to support the Content-Encoding: gzip header, but there is no standard access to the browser's gzip / gunzip function. So one must use a JavaScript-only approach. There are some old gzip-js libraries around, but they don't seem stream-enabled and are 6 years out of maintenance.
Then there is pako, more actively maintained, but that also doesn't seem stream-enabled if you use their own distribution, so you need to hold the entire binary array and the gzip output in memory. I might be wrong, but that is what I am gathering.
JSZip is a well designed tool and has support for streams ("Workers"). JSZip uses pako. ZIP entries are DEFLATEd and have a CRC32 checksum just like gzip, only slightly differently organized, of course. Just from contemplating the JSZip sources, it looks like it would be easy to expose pako's gzip option through JSZip's stream support. And if I use JSZip and also need gzip, why would I want to load pako twice?
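(For comparison: pako's own one-shot API already produces gzip framing without any of the work below, just without streaming; a minimal sketch, assuming pako is loaded on its own, which holds everything in memory as discussed above:

var input = new TextEncoder().encode("Hello World!");
var compressed = pako.gzip(input);                                  // Uint8Array with gzip header/trailer
var roundTrip = new TextDecoder().decode(pako.ungzip(compressed));  // back to "Hello World!"

The rest of this answer is about getting the same result through JSZip's streaming Workers instead.)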
I was hoping I could just hack my way into the internals of JSZip, down to the underlying Workers, and use the pako-based "Flate" (i.e., in-flate / de-flate) implementation with the gzip option that pako recognizes. I explored it with the Chrome javascript console, but I can't get through: the distributable loadable jszip.js or jszip-min.js hides all of the internals from access by scripts. I cannot open that box.
So I have been looking at the github source code to see if I could build my own jszip.js or jszip-min.js loadable module that exports more of the internal resources for use in my page. But having been in this for 20 years - UNIX makefiles, ant, everything - I feel like a complete novice when it comes to these tricks of packaging javascript modules. I see bower and "gruntfiles", which all seem to be related to node.js, which I don't need (only client-side browser) and have never worked with, so I had no idea where to start.
As Evert was saying, I should have checked first for the build instructions in the documentation https://stuk.github.io/jszip/documentation/contributing.html.
From that it is clear: first one needs git and makes a local clone. Then one needs to set up the grunt command line, which requires npm, which comes with nodejs. Once grunt runs, there are other dependencies that need to be npm install-ed. It's the usual little things being off and not working, but with enough Googling and brute-force retrying it gets done.
Now jszip/lib/index.js contains the resource that is finally exported: the JSZip object. So, just to play with the internal stuff, I can add resources to the JSZip object. For example, it already contains:
JSZip.external = require("./external");
module.exports = JSZip;
and so we can easily add other resources we want to play with:
JSZip.flate = require("./flate");
JSZip.DataWorker = require("./stream/DataWorker");
JSZip.DataLengthProbe = require("./stream/DataLengthProbe");
JSZip.Crc32Probe = require("./stream/Crc32Probe");
JSZip.StreamHelper = require("./stream/StreamHelper");
JSZip.pako = require("pako");
Now with that, I can create a proof of concept in the Chrome debugger:
(new JSZip.StreamHelper(
    (new JSZip.DataWorker(Promise.resolve("Hello World! Hello World! Hello World! Hello World! Hello World! Hello World!")))
        .pipe(new JSZip.DataLengthProbe("uncompressedSize"))
        .pipe(new JSZip.Crc32Probe())
        .pipe(JSZip.flate.compressWorker({}))
        .pipe(new JSZip.DataLengthProbe("compressedSize"))
        .on("end", function(event) { console.log("onEnd: ", this.streamInfo); }),
    "uint8array", "")
).accumulate(function(data) { console.log("acc: ", data); })
 .then(function(data) { console.log("then: ", data); });
and this works. I have been making myself a GZipFileStream with gzip header and trailer, creating everything correctly. I put a jszip/lib/generate/GZipFileWorker.js in as follows:
'use strict';

var external = require('../external');
var utils = require('../utils');
var flate = require('../flate');
var GenericWorker = require('../stream/GenericWorker');
var DataWorker = require('../stream/DataWorker');
var StreamHelper = require('../stream/StreamHelper');
var DataLengthProbe = require('../stream/DataLengthProbe');
var Crc32Probe = require('../stream/Crc32Probe');

function GZipFileWorker() {
    GenericWorker.call(this, "GZipFileWorker");
    this.virgin = true;
}
utils.inherits(GZipFileWorker, GenericWorker);

GZipFileWorker.prototype.processChunk = function(chunk) {
    if(this.virgin) {
        this.virgin = false;
        var headerBuffer = new ArrayBuffer(10);
        var headerView = new DataView(headerBuffer);
        headerView.setUint16(0, 0x8b1f, true); // GZip magic
        headerView.setUint8(2, 0x08);          // compression algorithm DEFLATE
        headerView.setUint8(3, 0x00);          // flags
        // bit 0 FTEXT
        // bit 1 FHCRC
        // bit 2 FEXTRA
        // bit 3 FNAME
        // bit 4 FCOMMENT
        headerView.setUint32(4, (new Date()).getTime()/1000>>>0, true); // MTIME
        headerView.setUint8(8, 0x00);          // no extension headers
        headerView.setUint8(9, 0x03);          // OS type UNIX
        this.push({data: new Uint8Array(headerBuffer)});
    }
    this.push(chunk);
};

GZipFileWorker.prototype.flush = function() {
    var trailerBuffer = new ArrayBuffer(8);
    var trailerView = new DataView(trailerBuffer);
    trailerView.setUint32(0, this.streamInfo["crc32"]>>>0, true);
    trailerView.setUint32(4, this.streamInfo["originalSize"]>>>0 & 0xffffffff, true);
    this.push({data: new Uint8Array(trailerBuffer)});
};

exports.gzip = function(data, inputFormat, outputFormat, compressionOptions, onUpdate) {
    var mimeType = data.contentType || data.mimeType || "";
    if(! (data instanceof GenericWorker)) {
        inputFormat = (inputFormat || "").toLowerCase();
        data = new DataWorker(
            utils.prepareContent(data.name || "gzip source",
                                 data,
                                 inputFormat !== "string",
                                 inputFormat === "binarystring",
                                 inputFormat === "base64"));
    }
    return new StreamHelper(
        data
            .pipe(new DataLengthProbe("originalSize"))
            .pipe(new Crc32Probe())
            .pipe(flate.compressWorker(compressionOptions || {}))
            .pipe(new GZipFileWorker()),
        outputFormat.toLowerCase(), mimeType).accumulate(onUpdate);
};
and in jszip/lib/index.js I need just this:
var gzip = require("./generate/GZipFileWorker");
JSZip.gzip = gzip.gzip;
and it works like this:
JSZip.gzip("Hello World! Hello World! Hello World! Hello World! Hello World! Hello World!", "string", "base64", {level: 3}).then(function(result) { console.log(result); })
I can paste the result into a UNIX pipe like this:
$ echo -n "H4sIAOyR/VsAA/NIzcnJVwjPL8pJUVTwoJADAPCORolNAAAA" |base64 -d |zcat
and it correctly returns
Hello World! Hello World! Hello World! Hello World! Hello World! Hello World!
It can also be used with files:
JSZip.gzip(file, "", "Blob").then(function(blob) {
    xhr.setRequestHeader("Content-encoding", "gzip"); // setRequestHeader is XHR's method (setRequestProperty is the Java name)
    xhr.send(blob);
});
and I can send the blob to my web server. I have checked that indeed the large file is processed in chunks.
The only thing I don't like about this is that the final blob is still assembled as one big Blob, so I am assuming it holds all the compressed data in memory. It would be better if that Blob were an end-point of that Worker pipeline, so that when xhr.send grabs the data chunk-wise from the Blob, it would consume chunks from the Worker pipeline only then. However, the impact is lessened a lot given that it only holds compressed content, and likely (for me at least) large files would be multi-media files that won't need to be gzip compressed anyway.
I did not write a gunzip function, because frankly, I don't need one, and I don't want to make one that fails to properly parse extension headers in the gzip header. As soon as I have uploaded compressed content to the server (S3 in my case), I assume the browser will do the decompressing for me when I fetch it again. I haven't checked that, though. If it becomes a problem, I'll come back and edit this answer.
Here is my fork on github: https://github.com/gschadow/jszip, pull request already entered.
I'm trying to write a function, that would use native openssl to do some RSA heavy-lifting for me, rather than using a js RSA library. The target is to
Read binary data from a file
Do some processing in the node process, using JS, resulting in a Buffer containing binary data
Write the buffer to the stdin stream of the exec command
RSA encrypt/decrypt the data and write it to the stdout stream
Get the input data back to a Buffer in the JS-process for further processing
The child process module in Node has an exec command, but I fail to see how I can pipe the input to the process and pipe its output back to my process. Basically I'd like to execute the following type of command, but without having to rely on writing things to files (I didn't check the exact syntax of openssl):
cat the_binary_file.data | openssl -encrypt -inkey key_file.pem -certin > the_output_stream
I could do this by writing a temp file, but I'd like to avoid it if possible. Spawning a child process gives me access to stdin/stdout, but I haven't found this functionality for exec.
Is there a clean way to do this in the way I drafted here? Is there some alternative way of using openssl for this, e.g. some native bindings for openssl lib, that would allow me to do this without relying on the command line?
You've mentioned spawn but seem to think you can't use it. Possibly showing my ignorance here, but it seems like it should be just what you're looking for: Launch openssl via spawn, then write to child.stdin and read from child.stdout. Something very roughly like this completely untested code:
var util = require('util'),
    spawn = require('child_process').spawn;

function sslencrypt(buffer_to_encrypt, callback) {
    var ssl = spawn('openssl', ['-encrypt', '-inkey', 'key_file.pem', '-certin']),
        result = new Buffer(SOME_APPROPRIATE_SIZE),
        resultSize = 0;

    ssl.stdout.on('data', function (data) {
        // Save up the result (or perhaps just call the callback repeatedly
        // with it as it comes, whatever)
        if (data.length + resultSize > result.length) {
            // Too much data, our SOME_APPROPRIATE_SIZE above wasn't big enough
        }
        else {
            // Append to our buffer at the current offset
            data.copy(result, resultSize);
            resultSize += data.length;
        }
    });
    ssl.stderr.on('data', function (data) {
        // Handle error output
    });
    ssl.on('exit', function (code) {
        // Done, trigger your callback (perhaps check `code` here)
        callback(result, resultSize);
    });

    // Write the buffer, then close stdin so openssl knows the input is complete
    ssl.stdin.write(buffer_to_encrypt);
    ssl.stdin.end();
}
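A hypothetical call site (the file names come from the question's command line; the callback shape is the one defined above):

var fs = require('fs');
sslencrypt(fs.readFileSync('the_binary_file.data'), function (result, resultSize) {
    // only the first resultSize bytes of `result` are valid output
    fs.writeFileSync('the_output_stream', result.slice(0, resultSize));
});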
You should be able to set the encoding to binary when you make a call to exec, like..

exec("openssl output_something_in_binary", {encoding: 'binary'}, function(err, stdout, stderr) {
    // do something with stdout - which is in the binary format
});
If you want to write out the content of stdout in binary, make sure to set the encoding to binary again, like..

fs.writeFile("out.bin", stdout, {encoding: 'binary'}, function(err) {
    // handle a possible write error here
});
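As an aside: in current Node, passing {encoding: 'buffer'} makes exec hand the callback Buffer objects directly, which avoids the binary-string round trip entirely; a minimal sketch:

exec("openssl output_something_in_binary", {encoding: 'buffer'}, function(err, stdout, stderr) {
    // stdout is a Buffer here, ready for fs.writeFile as-is
    fs.writeFile("out.bin", stdout, function(err) { /* handle error */ });
});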
I hope this helps!
I'm trying to write a string to a socket (the socket is called "response"). Here is the code I have so far (I'm trying to implement a byte-caching proxy...):
var http = require('http');
var sys = require('sys');
var localHash = {};

http.createServer(function(request, response) {
    var proxy = http.createClient(80, request.headers['host']);
    var proxy_request = proxy.request(request.method, request.url, request.headers);
    proxy_request.addListener('response', function (proxy_response) {
        proxy_response.addListener('data', function(x) {
            var responseData = x.toString();
            var f = 50;
            var toTransmit = "";
            var p = 0;
            var N = responseData.length;
            if (N > f) {
                p = Math.floor(N/f);
                var hash = "";
                var chunk = "";
                for (var i = 0; i < p; i++) {
                    chunk = responseData.substr(f*i, f);
                    hash = DJBHash(chunk);
                    if (localHash[hash] == undefined) {
                        localHash[hash] = chunk;
                        toTransmit = toTransmit + chunk;
                    } else {
                        sys.puts("***hit" + chunk);
                        toTransmit = toTransmit + chunk; //"***EOH"+hash;
                    }
                }
                // remainder:
                chunk = responseData.substr(f*p);
                hash = DJBHash(chunk);
                if (localHash[hash] == undefined) {
                    localHash[hash] = chunk;
                    toTransmit = toTransmit + chunk;
                } else {
                    toTransmit = toTransmit + chunk; //"***EOH"+hash;
                }
            } else {
                toTransmit = responseData;
            }
            response.write(new Buffer(toTransmit)); /* error occurs here */
        });
        proxy_response.addListener('end', function() {
            response.end();
        });
        response.writeHead(proxy_response.statusCode, proxy_response.headers);
    });
    request.addListener('data', function(chunk) {
        sys.puts(chunk);
        proxy_request.write(chunk, 'binary');
    });
    request.addListener('end', function() {
        proxy_request.end();
    });
}).listen(8080);

function DJBHash(str) {
    var hash = 5381;
    for (var i = 0; i < str.length; i++) {
        hash = (((hash << 5) + hash) + str.charCodeAt(i)) & 0xffffffff;
    }
    if (hash < -1) {
        hash = hash * -1;
    }
    return hash;
}
The trouble is, I keep getting a "content encoding error" in Firefox. It's as if the gzipped content isn't being transmitted properly. I've ensured that "toTransmit" is the same as "x" via console.log(x) and console.log(toTransmit).
It's worth noting that if I replace response.write(new Buffer(toTransmit)) with simply response.write(x), the proxy works as expected, but I need to do some payload analysis and then pass "toTransmit", not "x".
I've also tried response.write(toTransmit) (i.e. without the conversion to a buffer) and I keep getting the same content encoding error.
I'm really stuck. I thought I had this problem fixed by converting the string to a buffer as per another thread (http://stackoverflow.com/questions/7090510/nodejs-content-encoding-error), but I've opened this new thread to discuss the new problem I'm experiencing.
I should add that if I open a page via the proxy in Opera, I get gobbledygook - it's as if the gzipped data gets corrupted.
Any insight greatly appreciated.
Many thanks in advance,
How about this?
var responseData = Buffer.from(x, 'utf8');
from: Convert string to buffer Node
Without digging very deep into your code, it seems to me that you might want to change
var responseData=x.toString();
to
var responseData=x.toString("binary");
and finally
response.write(new Buffer(toTransmit, "binary"));
From the docs:
Pure Javascript is Unicode friendly but not nice to binary data. When
dealing with TCP streams or the file system, it's necessary to handle
octet streams. Node has several strategies for manipulating, creating,
and consuming octet streams.
Raw data is stored in instances of the Buffer class. A Buffer is
similar to an array of integers but corresponds to a raw memory
allocation outside the V8 heap. A Buffer cannot be resized.
So, don't use strings for handling binary data.
Change proxy_request.write(chunk, 'binary'); to proxy_request.write(chunk);.
Omit var responseData=x.toString();, that's a bad idea.
Instead of doing substr on a string, use slice on a buffer.
Instead of doing + with strings, use the "concat" method from buffertools (or the built-in Buffer.concat); a sketch follows.
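Putting those points together, the data handler from the question might look roughly like this - a sketch only, which keeps the chunking/hashing shape of the original but never leaves Buffer land, and uses the built-in Buffer.concat:

proxy_response.addListener('data', function(x) {
    var f = 50;
    var parts = [];
    for (var i = 0; i < x.length; i += f) {
        var chunk = x.slice(i, i + f);                // slice on the Buffer, no string round-trip
        var hash = DJBHash(chunk.toString('binary')); // hashing still needs a byte-faithful view
        if (localHash[hash] == undefined) {
            localHash[hash] = chunk;
        }
        parts.push(chunk); // the bytes pass through unmodified
    }
    response.write(Buffer.concat(parts));
});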
Actually, new Buffer() has been deprecated since Node.js v10+, so it is better to use Buffer.from(data, encoding).
Instead of
response.write(new Buffer(toTransmit));
do
response.write(Buffer.from(toTransmit, 'binary'));