How to decompress an object compressed by zlib in JavaScript?

In a JavaScript project, I want to decompress an object that was compressed with zlib (Zopfli.js), and I'm trying to do it with pako.min.js.
However, the example on pako's official site uses the require function, which does not exist in browser JavaScript. Node.js has it, but I'm afraid it would take a lot of time and pain to combine this JavaScript project with Node.js, because I know nothing about Node.js.
Is there any way to get around this, or another way to decompress the object?
Any information would be appreciated.
What I've already tried
I've already tried the zlib.js library for decompressing, but it throws the error below, which I couldn't find any solution for:
const compressed = dataCompressedByZlib;
const inflate = new Zlib.Inflate(compressed);
const plain = inflate.decompress();// -> input buffer is broken

You can use pako.js for client-side JavaScript and include it as
<script type="text/javascript" src="pako.js"></script>
inside your HTML, as mentioned here:
https://stackoverflow.com/a/22675078/7895283
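Once the script tag above has loaded pako, something along these lines should work for the compressed data from the question. This is only a sketch: it assumes the compressed bytes are available as a Uint8Array and that the original content was a UTF-8/JSON string.
// dataCompressedByZlib is assumed to be a Uint8Array (or Array of bytes)
const compressed = dataCompressedByZlib;

// pako.inflate() reverses zlib/deflate compression and returns a Uint8Array
const plainBytes = pako.inflate(compressed);

// If the original content was a UTF-8 string (e.g. JSON), decode it directly:
const plainText = pako.inflate(compressed, { to: 'string' });
const obj = JSON.parse(plainText);
console.log(obj);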

Related

How can I read a file's metadata in Node.js, beyond what fs.statSync provides, without using a library?

This is a topic where I can't seem to find the answer in the Node.js docs (I know it's possible because of libraries like exif), nor can I find an answer on the internet without everyone saying to just use a library.
I don't want to use a library, so I want to do this natively and learn more about reading file metadata, and maybe eventually updating the metadata too while building my own mini-tool.
If I run something like fs.statSync() I can get the generic metadata returned in the Stats object; but in my case I'm looking for all the other metadata, NOT just basic file info like size, birthtime, etc.
I want the other metadata like dimensions, date taken, and especially things you'd see in image, video, or audio files.
Maybe there's something like:
const deepMetaData = fs.readFileSync().getMetaDataAsString();
console.info(/Date Taken/.test(deepMetaData)); // true
or
const deepMetaData = fs.createReadStream().buffer().toString();
const dateTaken = deepMetaData.match(/Date Taken: (\d{4}-\d{2}-\d{2})/)[1];
console.info(dateTaken);
If I need to work with buffers, streams, or whatever instead of a string output, that's cool too. Ideally something synchronous. So if there's a simple example someone could provide of how to read that kind of metadata without a library, I'll at least be able to look up the methods used, understand more later, and leverage the docs for whatever approach is shown. Thank you!
Node.js fs functions like fs.statSync() only provide OS-level metadata about the file (such as creation date, modification date, file size, etc.). These are properties of the file in the file system. They do NOT have anything at all to do with the actual data of the file itself.
When you talk about EXIF (for a photo), this is parsed from the file data itself. To know about that type of data, you must read and parse at least the beginning of the file and you must be able to recognize and understand all the different file formats that you might encounter. For photos, this would include JPEG, PNG, HEIC, GIF, etc... Each of those have different file formats and will require unique code for understanding the metadata embedded in the file.
Nodejs does not have support for any of that built-in.
So, it will take custom code for each file type. If you further want to include other types of files like videos, you need to extend the list of different file types you can read, parse and understand. For the breadth of files you're talking about, this is a big job, particularly when it comes to testing against all the different variants of files and metadata that exist out in the wild.
I personally would be fine with implementing my own code for one particular file type like JPEG, but if I was tasked with supporting dozens of types of files and particularly if tasked with supporting the wide range of video file formats, I'd immediately seek out help from existing libraries that have already done all the time consuming work to research, write and test how to properly read and understand all the variants.
I know it's possible because of libraries like exif
This is an example of a library that reads the beginning of the image file, parses it according to the expected format and knows how to interpret all the possible tags that can be in the EXIF header and what they all mean.
So if there's a simple example someone could provide of how to read that kind of meta data without a library
Go study the code for the EXIF library and see how it works. If you're going to implement it yourself, that's how you have to do it. I'm still not sure why you'd avoid using working libraries that already exist. That is one of the biggest advantages of the Node.js ecosystem: you can build on all the open-source code that already exists instead of reimplementing it from scratch, and spend your coding time on the parts of your problem that someone else has not already solved.
how would one read that metadata using node?
You literally have to read the data from the file (usually at the start of the file). You can use any of the mechanisms that the fs module provides. For example, you can use fs.createReadStream() and then stream in the file, parsing and interpreting it as data arrives, and then stop the stream when you get past the end of the metadata. Or, you can open a file handle using fs.open() and use fs.read() to read chunks of the file until you have read enough to have all the metadata.
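Here is a rough sketch of the streaming approach. The path and the 64 KB cutoff are placeholders; how many bytes you actually need depends on the file format.
const fs = require('fs');

// Read only the first 64 KB instead of streaming the whole file
const stream = fs.createReadStream('/path/to/photo.jpg', { start: 0, end: 64 * 1024 - 1 });

const chunks = [];
stream.on('data', (chunk) => chunks.push(chunk));
stream.on('end', () => {
  const header = Buffer.concat(chunks);
  // header now holds the raw bytes you would parse according to the
  // file format (JPEG/EXIF, PNG, HEIC, ...)
  console.log('read', header.length, 'bytes');
});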
You HAVE an example sitting right in front of you of code that does this in the EXIF library on NPM that you already seem to know about. Just go examine its code. The code is ALL there.
I'm just looking for a simple answer on getting that info, even if it's a blob of strings.
This is perhaps your main problem. There is no simple answer to get that info and it doesn't just exist as a blob of strings. These files are sometimes binary files (for space efficiency reasons). You have to learn how to read and parse binary data. Go study the code in the EXIF library and see what it is already doing and you can learn from that. There is no better example to start with.
But, for a simple example using the HEIC file type, this grabs the first 5000 bytes of the file, which can then be searched as a string:
const fs = require('fs');

// absPathToHeicPhoto is the absolute path to the .heic file
const fileDescriptor = fs.openSync(absPathToHeicPhoto, 'r');
const charCount = 5000;
const buffer = Buffer.alloc(charCount);
const bytesRead = fs.readSync(fileDescriptor, buffer, 0, charCount, 0);
fs.closeSync(fileDescriptor);
const bufferAsStr = buffer.toString('utf8', 0, bytesRead);
console.info(/\d{4}:\d{2}:\d{2}/.test(bufferAsStr)); // EXIF-style date, e.g. 2021:06:04
FYI, I looked at the code for this EXIF library on NPM and it's poorly implemented. It uses fs.readFile() to load the ENTIRE image into RAM, even though it only needs a fraction of the data at the start of the file, which makes it memory and disk inefficient.
But, it does have a method called processImage and one called extractExifData that parse the binary data of the file to pull out the EXIF info. You can find them in its source code and start learning there.
FYI, as a photographer, I use a command line program called exiftool that will dump EXIF info to stdout or to a file for many image types. As a different approach, you could run that tool from your Node.js program (using the child_process module), capture its output, and operate on that output, letting exiftool do the hard work.
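A sketch of that approach, assuming exiftool is installed and on your PATH; the path and the tag name are placeholders, and -json is exiftool's option for printing one JSON object per file:
const { execFile } = require('child_process');

execFile('exiftool', ['-json', '/path/to/photo.jpg'], (err, stdout) => {
  if (err) throw err;
  const [meta] = JSON.parse(stdout);
  // DateTimeOriginal is a common EXIF tag; which tags exist depends on the file
  console.log(meta.DateTimeOriginal);
});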

Calculate file checksum in JS using ReadableStream and PipeTo?

I wish to calculate a file's checksum locally with JS and have been searching for examples of how to accomplish this using the Streams API and pipeTo, but I have only found examples with Node.js, which I am not familiar with. What I'm looking for is something like this:
var stream = file.stream();
var hash = CryptoJS.algo.SHA1.create();
await stream.pipeTo(hash);
console.log(hash);
This code does not work since CryptoJS doesn't seem to create a WritableStream. Can I wrap what CryptoJS creates in some kind of shim or sink? Other suggestions?
I could read and append buffers incrementally of course, but the code becomes a bit messy so I was hoping I could leverage pipeTo to avoid complicating the code.
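The closest I've gotten is the idea of wrapping the hasher in a WritableStream myself, roughly like this (untested sketch; it assumes CryptoJS.lib.WordArray.create() accepts a Uint8Array, which I believe requires the full crypto-js build with typed-array support):
async function sha1OfFile(file) {
  const hasher = CryptoJS.algo.SHA1.create();

  // Wrap the hasher in a WritableStream so it can be a pipeTo() destination
  const hashSink = new WritableStream({
    write(chunk) {
      // Each chunk from file.stream() is a Uint8Array
      hasher.update(CryptoJS.lib.WordArray.create(chunk));
    },
  });

  await file.stream().pipeTo(hashSink);
  return hasher.finalize().toString(); // hex digest
}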

How to read worldedit .schem file with nodejs

I'm trying to make a Node application that uploads .schem files, but I don't know how to use the fs module to read the file without modifying it. When I try to read the file and write it back, the data gets modified and WorldEdit in Minecraft can't read it.
Can someone please help me with this?
As you can see here, .schem files are saved using NBT; that page also describes its structure.
The NBT format is a binary format (compared to JSON, which you can read and modify easily), so you probably want to use an existing library. I have never programmed in Node.js, but I found this, if it helps.
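If the only goal is to move the file around without corrupting it, a minimal sketch (paths are placeholders) is to keep everything as a Buffer and never pass a text encoding. To my knowledge a .schem file is gzip-compressed NBT, so you can also gunzip it if you want to look inside:
const fs = require('fs');
const zlib = require('zlib');

// Reading without an encoding returns a raw Buffer, so the binary NBT data
// is copied byte-for-byte instead of being re-interpreted as text
const data = fs.readFileSync('input.schem');
fs.writeFileSync('copy.schem', data);

// Optional: decompress the (assumed) gzip wrapper to get the raw NBT bytes
const nbtBytes = zlib.gunzipSync(data);
console.log('decompressed NBT size:', nbtBytes.length);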

Javascript app: require is not defined

I have an array and I want to create a .json file to store the array in. This is what I have, but I receive require is not defined. I know it has something to do with Node.js, but I don't know what I should do.
let answersString = JSON.stringify(answersArray);
const fs = require('fs');
fs.writeFileSync("answers.json", answersString);
Thanks!
EDIT: Now I know this was a pretty dumb question, sorry. In the meantime I learned about node, bundling, testing etc.
You're using the code in the client, and require() does not exist in browser-side JavaScript (Node.js is server-side).
My proposal would be to send the data to the server side, where you will have fs, and do the work there.
Check this answer
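A rough sketch of that server-side idea; the /save-answers endpoint and the use of Express are my own assumptions, not something from your code:
// Browser side: send the array to the server
fetch('/save-answers', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(answersArray),
});

// Server side (Node.js + Express), where fs is available
const fs = require('fs');
const express = require('express');
const app = express();

app.use(express.json());
app.post('/save-answers', (req, res) => {
  fs.writeFileSync('answers.json', JSON.stringify(req.body));
  res.sendStatus(200);
});

app.listen(3000);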
You can still use require on the client side, but you need to use a bundler. The most well-known ones are Webpack and Browserify.

What's the best way to read Sqlite3 directly in Browser using Javascript?

For one of our Insights platforms, we plan to generate summary SQLite3 databases in the background and render them in the browser as charts. Currently, we intend to build a server-side endpoint that will serve the data.
We are looking to optimize this further by eliminating the server-side endpoint altogether. We are fine (from a security perspective) with exposing the SQLite3 files directly on S3 and having a JavaScript module read them and generate the charts.
The SQLite3 files are expected to be fairly small - perhaps 4-6 columns and 10-500 rows of data, each containing only one table. Test runs indicate file sizes of less than 15 KB.
We don't intend to write or manipulate the SQLite3 on the browser.
We don't need to cache it on the browser as a WebSQL or an IndexedDB form, but we are ok with using them if that is what is needed.
From my web searches, we were unable to find a JavaScript library that can read a SQLite3 file and query it for results. If you know of any JavaScript libraries that can do this, please let us know.
On the other hand, if you think that we shouldn't be doing this for whatever reason, then please throw them as comments/answers too, because this is something we are trying for the first time and seems a little out-of-the-box, so feedback welcome!
There is a JavaScript library called sql.js that can do exactly what you want. In your case, you would use it like this:
const SQL = await initSqlJs(options); // e.g. options = { locateFile: file => `./dist/${file}` }
const fetched = await fetch("/path/to/database.sqlite");
const buf = await fetched.arrayBuffer();
const db = new SQL.Database(new Uint8Array(buf));
const contents = db.exec("SELECT * FROM my_table");
// contents is now [{columns:['col1','col2',...], values:[[first row], [second row], ...]}]
See the documentation on sql-js.github.io/sql.js/documentation/
I cannot tell you the best way, but here is one: write a JavaScript SQLite reader library yourself. This would be a tedious task, but I am sure it can be done. Some cool folks have done pdf.js, a JavaScript renderer for PDF files, which are binary blobs just like SQLite files.
You would most probably start with the FileReader API to walk through the SQLite file, then create some in-memory representation of the content which your chart tool can use.
Disclaimer: You probably want to solve your initial problem with another solution, as proposed by others, but this answers your question.
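If you do go that route, a minimal starting point (only a sketch) is to get the raw bytes with FileReader and check the 16-byte magic header "SQLite format 3\0" before attempting to parse the pages:
function inspectSqliteFile(file) {
  const reader = new FileReader();
  reader.onload = () => {
    const bytes = new Uint8Array(reader.result);
    // A SQLite database starts with the 16-byte string "SQLite format 3\0"
    const magic = new TextDecoder().decode(bytes.slice(0, 15));
    console.log('Looks like SQLite:', magic === 'SQLite format 3');
    // From here you would parse the page size, sqlite_master, etc.,
    // according to the published SQLite file format.
  };
  reader.readAsArrayBuffer(file);
}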
