I wish to calculate a file's checksum locally with JS and have been searching for examples of how to accomplish this using the Streams API and pipeTo, but have only found examples with Node.js, which I am not familiar with. What I'm looking for is something like this:
var stream = file.stream();
var hash = CryptoJS.algo.SHA1.create();
await stream.pipeTo(hash);
console.log(hash);
This code does not work since CryptoJS doesn't seem to create a WritableStream. Can I wrap what CryptoJS creates in some kind of shim or sink? Other suggestions?
I could read and append buffers incrementally of course, but the code becomes a bit messy so I was hoping I could leverage pipeTo to avoid complicating the code.
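For reference, here's a rough sketch of the kind of sink wrapper I have in mind, assuming CryptoJS's WordArray.create accepts typed arrays (which recent versions do):

// Hypothetical shim: wrap a CryptoJS hash in a WritableStream sink
// so it can be the target of pipeTo()
const hash = CryptoJS.algo.SHA1.create();

const hashSink = new WritableStream({
  write(chunk) {
    // chunk is a Uint8Array; convert it to the WordArray CryptoJS expects
    hash.update(CryptoJS.lib.WordArray.create(chunk));
  },
});

await file.stream().pipeTo(hashSink);
console.log(hash.finalize().toString()); // hex digest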
Related
This is a topic where I can't seem to find the answer in the Node.js docs (I know it's possible because of libraries like exif), nor can I find an answer on the internet without everyone saying to just use a library.
I don't want to use a library, so I want to do this natively and learn more about reading file metadata, and maybe eventually updating the metadata too while building my own mini-tool.
If I run something like fs.statSync() I can get the generic metadata returned in the Stats object; but, in my case, I'm looking for all the other metadata, NOT just the basic file info like size, birthtime, etc.
I want the other metadata like dimensions, date taken, and especially things you'd see in image, video, or audio files.
Maybe there's something like:
const deepMetaData = fs.readFileSync().getMetaDataAsString();
console.info(/Date Taken/.test(deepMetaData)); // true
or
const deepMetaData = fs.createReadStream().buffer().toString();
const dateTaken = deepMetaData.match(/Date Taken: (\d{4}-\d{2}-\d{2})/)[1];
console.info(dateTaken);
If I need to work with buffers, streams, or whatever instead of a string output, that's cool too. Ideally it would be synchronous. So if someone could provide a simple example of how to read that kind of metadata without a library, I'll at least be able to look up the methods it uses to understand more later and leverage the docs for whatever approach it takes. Thank you!
Nodejs fs functions like fs.statSync() provide OS-level metadata about the file only (such as createDate, modificationDate, file size, etc...). These are properties of the file in the file system. They do NOT have anything at all to do with the actual data inside the file itself.
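For example, a quick sketch (assuming a local ./photo.jpg):

const fs = require('fs');

// Stats describes the file-system entry, not the file's contents
const stats = fs.statSync('./photo.jpg');
console.log(stats.size);      // bytes on disk
console.log(stats.birthtime); // when the file was created on this file system
console.log(stats.mtime);     // when it was last modified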
When you talk about EXIF (for a photo), this is parsed from the file data itself. To know about that type of data, you must read and parse at least the beginning of the file and you must be able to recognize and understand all the different file formats that you might encounter. For photos, this would include JPEG, PNG, HEIC, GIF, etc... Each of those have different file formats and will require unique code for understanding the metadata embedded in the file.
Nodejs does not have support for any of that built-in.
So, it will take custom code for each file type. If you further want to include other types of files like videos, you need to extend your list of different file types you can read, parse and understand. For the range of files you're talking about, this is a big job, particularly when it comes to testing against all the different variants of files and metadata that exist out in the wild.
I personally would be fine with implementing my own code for one particular file type like JPEG, but if I were tasked with supporting dozens of file types, and particularly the wide range of video file formats, I'd immediately seek out help from existing libraries that have already done all the time-consuming work to research, write and test how to properly read and understand all the variants.
I know it's possible because of libraries like exif
This is an example of a library that reads the beginning of the image file, parses it according to the expected format and knows how to interpret all the possible tags that can be in the EXIF header and what they all mean.
So if there's a simple example someone could provide of how to read that kind of meta data without a library
Go study the code for the EXIF library and see how it works. If you're going to implement it yourself, that's how you have to do it. I'm still not sure why you'd avoid using working libraries that already exist. That is one of the biggest advantages of the nodejs ecosystem: you can build on all the open source code that already exists, rather than reimplementing it all from scratch yourself, and spend your coding time on the parts of your problem that someone else has not already solved.
how would one read that metadata using node?
You literally have to read the data from the file (usually at the start of the file). You can use any of the mechanisms the fs module provides. For example, you can use fs.createReadStream() and stream in the file, parsing and interpreting the data as it arrives, then stop the stream once you get past the end of the metadata. Or, you can open a file handle using fs.open() and use fs.read() to read chunks of the file until you have read enough to have all the metadata.
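A minimal sketch of that second approach, assuming a JPEG whose EXIF APP1 segment comes right after the SOI marker (real files may have other segments such as APP0/JFIF in between, so a real parser has to walk the segments):

const fs = require('fs');

// Read just the first few bytes instead of loading the whole file
const fd = fs.openSync('./photo.jpg', 'r');
const header = Buffer.alloc(12);
fs.readSync(fd, header, 0, header.length, 0);
fs.closeSync(fd);

// JPEG files start with the SOI marker 0xFFD8
const isJpeg = header[0] === 0xff && header[1] === 0xd8;
// An EXIF APP1 segment is the marker 0xFFE1, a 2-byte length,
// then the ASCII string "Exif"
const hasExif = header[2] === 0xff && header[3] === 0xe1 &&
    header.toString('ascii', 6, 10) === 'Exif';
console.log({ isJpeg, hasExif });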
You HAVE an example sitting right in front of you of code that does this in the EXIF library on NPM that you already seem to know about. Just go examine its code. The code is ALL there.
I'm just looking for a simple answer on getting that info, even if it's a blob of strings.
This is perhaps your main problem. There is no simple answer to get that info and it doesn't just exist as a blob of strings. These files are sometimes binary files (for space efficiency reasons). You have to learn how to read and parse binary data. Go study the code in the EXIF library and see what it is already doing and you can learn from that. There is no better example to start with.
But, for a simple example using the HEIC file type, this will grab the first 5000 bytes of the file, which can then be searched:
const fs = require('fs');

const fileDescriptor = fs.openSync(absPathToHeicPhoto, 'r');
const byteCount = 5000;
const buffer = Buffer.alloc(byteCount);
const bytesRead = fs.readSync(fileDescriptor, buffer, 0, byteCount, 0);
fs.closeSync(fileDescriptor);

// EXIF-style dates are stored as YYYY:MM:DD
const bufferAsStr = buffer.toString('utf8', 0, bytesRead);
console.info(/\d{4}:\d{2}:\d{2}/.test(bufferAsStr));
FYI, I looked at the code for this EXIF library on NPM and it's poorly implemented. It uses fs.readFile() to load the ENTIRE image into RAM, even though it only needs a fraction of the data at the start of the file, which is both memory and disk inefficient.
But, it does have a method called processImage and one called extractExifData that process the binary data of the file to parse out the EXIF info. Both are in its actual code; you can start learning there.
FYI, as a photographer, I use a command line program called exiftool that will dump EXIF info to stdout or to a file for many image types. As a different approach, you could just run that tool from your nodejs program (using the child_process module), capture its output and operate on that, letting exiftool do the hard work.
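A minimal sketch of that approach, assuming exiftool is installed and on your PATH (its -json flag makes the output easy to consume):

const { execFile } = require('child_process');

// Run exiftool as a child process and capture its JSON output
execFile('exiftool', ['-json', './photo.jpg'], (err, stdout) => {
  if (err) throw err;
  const [metadata] = JSON.parse(stdout); // exiftool outputs an array, one entry per file
  console.log(metadata.DateTimeOriginal); // e.g. "2021:06:01 12:34:56"
});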
In a JavaScript project, I want to decompress an object compressed with zlib (Zopfli.js), and I'm trying to do it with pako.min.js.
However, the example on pako's official site uses the require function, which does not exist in plain browser JavaScript. Node.js has it, but I'm afraid it would take a lot of time and pain to combine this JavaScript project with Node.js, because I know nothing about Node.js.
Is there any way to get through this, or another way to decompress the object?
Any information would be appreciated.
What I've already tried
I've already tried the zlib.js library for decompressing, but it throws the error below, for which I couldn't find any solution:
const compressed = dataCompressedByZlib;
const inflate = new Zlib.Inflate(compressed);
const plain = inflate.decompress();// -> input buffer is broken
For client-side JavaScript, you may use pako.js from here and include it as
<script type="text/javascript" src="pako.js"></script>
inside your HTML, as mentioned here:
https://stackoverflow.com/a/22675078/7895283
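Once the script tag has loaded, a global pako object is available, so no require is needed. A minimal sketch (with dataCompressedByZlib standing in for your compressed data):

// pako.inflate decompresses zlib data; { to: 'string' } returns a string
const compressed = dataCompressedByZlib; // e.g. a Uint8Array
const plain = pako.inflate(compressed, { to: 'string' });
console.log(plain);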
As seen here: https://anidiots.guide/coding-guides/storing-data-in-a-json-file.html
It shows you how to create a point system in discord.js. But what caught my eye is how they used let points = JSON.parse(fs.readFileSync("./points.json", "utf8"));
to read the file. So I am trying to learn how to make a database where I track points plus money that can be redeemed daily and shared, kind of like a bank, but I don't know how to do that. If anyone could help me with a hastebin link, or point me anywhere I can learn in depth how to use the JSON.parse(fs.readFileSync("./points.json", "utf8")); thing, that would be great.
And if you want to see my bot in action, don't hesitate to use https://discord.me/knut
The line you're asking about is made of two calls to functions: JSON.parse and fs.readFileSync.
JSON.parse. This function receives a bunch of text and transforms (parses) it into a JavaScript object. It can be very useful when you want to, for example, build something dynamically based on the content of a file. Maybe w3schools is a good place to start looking for info about it.
Example
var string = '{"id": 4, "name": "Volley"}'; // JSON requires double-quoted keys and strings
var parsedObject = JSON.parse(string);
console.log(parsedObject.id);   // 4
console.log(parsedObject.name); // Volley
fs.readFileSync. As you probably know, many functions in JavaScript and Node.js are asynchronous; that is, instead of calling them and getting the returned value, you define a callback within which you use the value you want. fs.readFileSync is just the synchronous version of fs.readFile(path, callback); it returns the content of the read file directly. Here you have the docs about that function.
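A minimal sketch of the difference:

const fs = require('fs');

// Asynchronous: the callback runs later, once the file has been read
fs.readFile('./points.json', 'utf8', (err, data) => {
  if (err) throw err;
  console.log(JSON.parse(data));
});

// Synchronous: blocks until the file is read, then returns its contents
const data = fs.readFileSync('./points.json', 'utf8');
console.log(JSON.parse(data));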
These functions are actually simple to use; you shouldn't struggle to find examples or to try them out yourself.
If you want to imitate what the tutorial did, you would need to define another file with the money values, or edit the first file if you can, so you could have an object like
var point_and_money = {
  points: [...],
  money: [...]
};
or two objects with the separate information
var points = JSON.parse(fs.readFileSync("./points.json", "utf8"));
var money = JSON.parse(fs.readFileSync("./money.json", "utf8"));
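As a minimal sketch of the bank idea you mentioned, assuming a points.json shaped like { "userId": { "points": 0, "money": 0, "lastDaily": 0 } } (the field names here are hypothetical):

const fs = require('fs');

const accounts = JSON.parse(fs.readFileSync('./points.json', 'utf8'));

function redeemDaily(userId) {
  // Hypothetical schema: create the account on first use
  const account = accounts[userId] ||
      (accounts[userId] = { points: 0, money: 0, lastDaily: 0 });
  const oneDay = 24 * 60 * 60 * 1000;
  // Only allow one redemption per 24 hours
  if (Date.now() - account.lastDaily < oneDay) return false;
  account.money += 100; // daily reward amount, pick your own
  account.lastDaily = Date.now();
  fs.writeFileSync('./points.json', JSON.stringify(accounts, null, 2));
  return true;
}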
Hope I gave you a hint about what you asked.
I'm not really sure what you are trying to achieve.
JSON.parse(fs.readFileSync("./points.json", "utf8"));
This line reads a JSON file and parses it into a JavaScript object. Nothing less and nothing more. This can also be done in Node.js via
var points = require('./points.json');
You mentioned something about how to do a database. Basically, I am not sure whether you want to develop a database yourself or would be better off using an existing one. Look at MongoDB, SQLite, IndexedDB, etc. There are tons of databases for almost every use case.
Remember that your line of code reads synchronously, in a blocking way, which hurts when the file gets large.
And when multiple users access the file at the same time, you need to handle that somehow. So I would definitely suggest looking for an existing database solution, leaving you more time to focus on your business logic.
I hope I understood your question correctly and my answer helps.
Maybe this one is also a good question to start: Lightweight Javascript DB for use in Node.js
I'm working on a JavaScript app and have so far entered all my strings as plain text.
This is starting to feel really hacky (I'm used to gettext) so I'd prefer to wrap them all in something like {{translatable_string}} and have a gulp task just search/replace them all during the build step.
So, my question is: is there a generic (not framework-specific, like angular-gettext) gettext replacer out there?
Obviously it doesn't even have to be connected to JavaScript in any way; you should be able to run it on any file type and have every {{translatable_string}} be translated.
You may want to look into using gulp-replace. As they explained in this answer, you should be able to use it to find and replace any string that you want in the stream.
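A minimal sketch of that approach, assuming the translations live in a JSON file keyed by string name (translations.json is a hypothetical file); gulp-replace accepts a regex plus a replacer function:

const gulp = require('gulp');
const replace = require('gulp-replace');
const translations = require('./translations.json'); // hypothetical: { "translatable_string": "..." }

gulp.task('translate', function () {
  return gulp.src('src/**/*.html')
    // Replace each {{key}} with its translation, leaving unknown keys alone
    .pipe(replace(/\{\{(\w+)\}\}/g, (match, key) => translations[key] || match))
    .pipe(gulp.dest('dist/'));
});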
I suggest a database of strings for your translations if dynamic generation of page content is possible for your app. Starting with English (or whichever language is your default) is normal, but the need to localize content is a tough issue without a robust system. A simple MongoDB collection can be used to store the content, and when the app needs an interface it can be loaded with the right localized strings. For instance:
if(err) alert("Please turn off caps lock");
could become:
if(err) alert(Please_turn_off_caps_lock.English);
If you need to build static pages with gulp, a database in conjunction with gulp-replace sounds interesting. Using gulp-data to look up and package the strings, you can then feed them to gulp-replace and alter the files. The extensible nature of databases and document stores lets you expand your localization without hacking on individual files or trees all the time.
Try gulp-gettext-parser.
var gulp = require("gulp");
var gettext = require("gulp-gettext-parser");
var rename = require("gulp-rename");

gulp.task("gettext", function() {
  return gulp.src("src/**/*.js")
    .pipe(gettext())
    .pipe(rename("bundle.po"))
    .pipe(gulp.dest("dist/"));
});
Perhaps what you need is mustache.js; take a look: https://github.com/janl/mustache.js/
I'm not used to working with mustache, but I had to make some updates in a project built with it, and I was surprised by the capabilities it has.
If you're familiar with jade (now renamed to pug), you'll find it somewhat similar, but in the end you're not forced to generate only HTML files; you can generate any kind of text file.
This blog post could be helpful for understanding the differences between some of the templating languages for Node.js: https://strongloop.com/strongblog/compare-javascript-templates-jade-mustache-dust/
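A minimal sketch of mustache.js in this role, where each {{placeholder}} is filled from a view object that could hold your translated strings:

const Mustache = require('mustache');

const template = 'Please {{turn_off_caps_lock}}';
const view = { turn_off_caps_lock: 'turn off caps lock' }; // one entry per translatable string
console.log(Mustache.render(template, view)); // "Please turn off caps lock"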
For one of our Insights platforms, we plan to generate summary SQLite3 databases in the background and have them rendered in the browser as charts. Currently, we intend to build a server-side endpoint that will serve the data.
We are looking to optimize this further by eliminating the server-side endpoint altogether. We are fine (from a security perspective) with exposing the SQLite3 files directly on S3 and having a JavaScript module read them and generate the charts.
The SQLite3 files are expected to be fairly small: perhaps 4-6 columns and 10-500 rows of data, each containing one table only. Test runs indicate file sizes of less than 15KB.
We don't intend to write or manipulate the SQLite3 on the browser.
We don't need to cache it on the browser as a WebSQL or an IndexedDB form, but we are ok with using them if that is what is needed.
From my web searches, we have been unable to find a JavaScript library that can read a SQLite3 file and query it for results. If you know of any JavaScript libraries that can do this, please let us know.
On the other hand, if you think that we shouldn't be doing this for whatever reason, then please throw them as comments/answers too, because this is something we are trying for the first time and seems a little out-of-the-box, so feedback welcome!
There is a JavaScript library called sql.js that can do exactly what you want. In your case, you would use it like this:
const SQL = await initSqlJs(); // optionally pass { locateFile: ... } to say where the wasm file lives
const fetched = await fetch("/path/to/database.sqlite");
const buf = await fetched.arrayBuffer();
const db = new SQL.Database(new Uint8Array(buf));
const contents = db.exec("SELECT * FROM my_table");
// contents is now [{columns:['col1','col2',...], values:[[first row], [second row], ...]}]
See the documentation on sql-js.github.io/sql.js/documentation/
I cannot tell you the best way, but here is one: write a JavaScript SQLite reader library yourself. This will be a tedious task, but I am sure it can be done. Some cool folks have done pdf.js, a JavaScript renderer for PDF files, which are binary blobs just like SQLite files.
You would most probably start with the FileReader API to walk through the SQLite file, then create some in-memory representation of the content that your chart tool can use.
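As a minimal first step, a sketch that uses the FileReader API to verify the file really is a SQLite database (every SQLite 3 file begins with the 16-byte magic header "SQLite format 3\0"):

// file is e.g. a File from an <input type="file"> element
function checkSqliteHeader(file) {
  const reader = new FileReader();
  reader.onload = () => {
    const header = new TextDecoder('ascii').decode(reader.result);
    console.log(header.startsWith('SQLite format 3')); // true for a valid db
  };
  reader.readAsArrayBuffer(file.slice(0, 16));
}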
Disclaimer: You probably want to solve your initial problem with another solution, as proposed by others, but this answers your question.