How to read a WorldEdit .schem file with Node.js - javascript

I'm trying to make a Node application that uploads .schem files, but I don't know how to use the fs import to read the file without modifying it. When I try to read the file and write it back out, the data gets modified, and WorldEdit in Minecraft can no longer read it.
Can someone please help me with this?

As you can see here, .schem files are saved using NBT; the linked page also describes the structure.
The NBT format is binary (unlike JSON, which you can read and modify easily), so you probably want to use an existing library. I have never programmed in Node.js, but I found this, if it helps.
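If the data is getting mangled, the usual culprit is reading or writing the file with a text encoding. Here is a minimal sketch of a lossless round trip (the file names are hypothetical, and the gunzip step assumes the usual gzip-compressed layout of .schem files):

const fs = require('fs');
const zlib = require('zlib');

// No encoding argument, so fs returns a raw Buffer. Passing 'utf8'
// here is a common cause of corrupted binary files.
const raw = fs.readFileSync('plot.schem');

// Writing the untouched Buffer back produces a byte-identical copy.
fs.writeFileSync('copy.schem', raw);

// To inspect the payload, decompress it first; the result is still
// binary NBT, not text.
const nbtBytes = zlib.gunzipSync(raw);
console.log('decompressed NBT payload:', nbtBytes.length, 'bytes');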

Related

How can I read a file's metadata in Node.js, beyond what fs.statSync provides, without using a library?

This is a topic where I can't seem to find the answer in the Node.js docs (I know it's possible because of libraries like exif), nor can I find an answer on the internet that doesn't just say to use a library.
I don't want to use a library; I want to do this natively to learn more about reading file metadata, and maybe eventually update the metadata too while building my own mini-tool.
If I run something like fs.statSync() I get the generic metadata returned in the Stats object; but, in my case, I'm looking for all the other metadata, NOT just the basic file info like size, birthtime, etc.
I want the other metadata like dimensions, date taken, and especially the things you'd see in image, video, or audio files.
Maybe there's something like:
const deepMetaData = fs.readFileSync().getMetaDataAsString();
console.info(/Date Taken/.test(deepMetaData)); // true
or
const deepMetaData = fs.createReadStream().buffer().toString();
const dateTaken = deepMetaData.match(/Date Taken: (\d{4}-\d{2}-\d{2})/)[1];
console.info(dateTaken);
If I need to work with buffers, streams, whatever, instead of a string output, that's cool too; ideally something synchronous. So if someone could provide a simple example of how to read that kind of metadata without a library, I'll at least be able to look up the methods it uses, understand more later, and lean on the docs for whatever approach it takes. Thank you!
Node.js fs functions like fs.statSync() provide OS-level metadata about the file only (such as create date, modification date, file size, etc...). These are properties of the file in the file system. They do NOT have anything at all to do with the actual data of the file itself.
When you talk about EXIF (for a photo), that is parsed from the file data itself. To get at that type of data, you must read and parse at least the beginning of the file, and you must be able to recognize and understand all the different file formats you might encounter. For photos, this would include JPEG, PNG, HEIC, GIF, etc... Each of those has a different file format and will require unique code for understanding the metadata embedded in the file.
Nodejs does not have support for any of that built-in.
So, it will take custom code for each file type. If you further want to include other types of files like videos, you need to extend the list of file types you can read, parse and understand. For the range of files you're talking about, this is a big job, particularly when it comes to testing against all the different variants of files and metadata that exist out in the wild.
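To make that concrete, here is a minimal sketch (file name hypothetical) of the kind of per-format branching involved; each format announces itself with different magic bytes at the start of the file, and its metadata then lives somewhere different:

const fs = require('fs');

// Read just the first few bytes to identify the format.
const fd = fs.openSync('unknown-file', 'r');
const head = Buffer.alloc(12);
fs.readSync(fd, head, 0, head.length, 0);
fs.closeSync(fd);

if (head[0] === 0xff && head[1] === 0xd8) {
  console.log('JPEG: EXIF lives in an APP1 segment after the SOI marker');
} else if (head.toString('ascii', 1, 4) === 'PNG') {
  console.log('PNG: textual metadata lives in tEXt/iTXt chunks');
} else if (head.toString('ascii', 4, 8) === 'ftyp') {
  console.log('HEIC/MP4 family: metadata lives in ISO BMFF boxes');
} else {
  console.log('unknown format: the parsing code would differ again');
}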
I personally would be fine with implementing my own code for one particular file type like JPEG, but if I were tasked with supporting dozens of file types, and particularly the wide range of video formats, I'd immediately seek out existing libraries that have already done the time-consuming work of researching, writing and testing how to properly read and understand all the variants.
I know it's possible because of libraries like exif
This is an example of a library that reads the beginning of the image file, parses it according to the expected format and knows how to interpret all the possible tags that can be in the EXIF header and what they all mean.
So if there's a simple example someone could provide of how to read that kind of meta data without a library
Go study the code for the EXIF library and see how it works. If you're going to implement it yourself, that's how you have to do it. I'm still not sure why you'd avoid using working libraries that already exist. That is one of the biggest advantages of the Node.js ecosystem: you can build on all the open-source code that already exists instead of reimplementing it from scratch, and spend your coding time on the parts of your problem that nobody else has already solved.
how would one read that metadata using node?
You literally have to read the data from the file (usually at the start of the file). You can use any of the mechanisms the fs module provides. For example, you can use fs.createReadStream(), stream in the file, parse and interpret the data as it arrives, and stop the stream once you get past the end of the metadata. Or, you can open a file handle using fs.open() and use fs.read() to read chunks of the file until you have read enough to have all the metadata.
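As a sketch of the streaming variant (the path and byte count are hypothetical), you can collect chunks until you have enough to cover the metadata region and then destroy the stream so the rest of the file is never read:

const fs = require('fs');

const WANTED = 64 * 1024; // enough to cover most file headers
const chunks = [];
let total = 0;

const stream = fs.createReadStream('photo.jpg');
stream.on('data', (chunk) => {
  chunks.push(chunk);
  total += chunk.length;
  if (total >= WANTED) stream.destroy(); // stop reading past the metadata
});
stream.on('close', () => {
  const header = Buffer.concat(chunks).subarray(0, WANTED);
  console.log('read', header.length, 'bytes of header data');
});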
You HAVE an example sitting right in front of you of code that does this in the EXIF library on NPM that you already seem to know about. Just go examine its code. The code is ALL there.
I'm just looking for a simple answer on getting that info, even if it's a blob of strings.
This is perhaps your main problem: there is no simple answer to getting that info, and it doesn't just exist as a blob of strings. These files are sometimes binary (for space-efficiency reasons), so you have to learn how to read and parse binary data. Go study the code in the EXIF library and see what it is already doing; you can learn from that. There is no better example to start with.
But, for a simple example using the HEIC file type, this will grab the first 5000 bytes of the file, which can then be searched:
const fs = require('fs');

const fileDescriptor = fs.openSync(absPathToHeicPhoto, 'r');
const byteCount = 5000;
const buffer = Buffer.alloc(byteCount);
const bytesRead = fs.readSync(fileDescriptor, buffer, 0, byteCount);
fs.closeSync(fileDescriptor);

// Decode only the bytes actually read; binary data decoded as UTF-8
// is lossy, but good enough for a quick pattern search.
const bufferAsStr = buffer.toString('utf8', 0, bytesRead);
console.info(/\d{4}:\d{2}:\d{2}/.test(bufferAsStr));
FYI, I looked at the code for this EXIF library on NPM and it's poorly implemented: it uses fs.readFile() to load the ENTIRE image into RAM, even though it only needs a small fraction of the data at the start of the file. That makes it memory- and disk-inefficient.
But, it does have a method called processImage and one called extractExifData that process the binary data of the file to parse out the EXIF info. You can start learning from its actual code there.
FYI, as a photographer, I use a command-line program called exiftool that will dump EXIF info to stdout or to a file for many image types. As a different approach, you could just run that tool from your Node.js program (using the child_process module), capture its output, and operate on that output, letting exiftool do the hard work.
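A minimal sketch of that approach (assumes exiftool is installed and on the PATH; the image path and tag name are just examples):

const { execFile } = require('child_process');

// -json makes exiftool's output easy to consume from Node.js.
execFile('exiftool', ['-json', 'photo.jpg'], (err, stdout) => {
  if (err) throw err;
  const [metadata] = JSON.parse(stdout); // one object per input file
  console.log(metadata.DateTimeOriginal); // available tags vary by file
});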

Node not reading full contents of file

I'm trying to read and parse a JSON file so I can update it, but it's not reading the full file; it stops partway through and never reads the rest. It's a massive JSON file because I can't really store the data any other way, besides splitting it into multiple JSON files.
The code of CacheManager is here
The size of what it read is 143,360 bytes, while the actual size of the file is 153,840 bytes. I've never really run into this issue before, so I have no clue how to remedy it. I'm using fs-extra in the code, but I've verified that the same issue happens with the built-in fs module. I've printed out the content it read, so I can see that it is reading the file, and it is reading the right content; it's just not getting all of it. I'll link the right content and what it's getting. It's cut off at the end; you can see the part of the JSON for the md5. The code writing it to the file is just writing the raw content of the read file here (see the part below the first screenshot for the regular code).
If the issue is caused by the size of the file, you may want to look into streaming alternatives to the standard JSON.parse (like https://www.npmjs.com/package/stream-json).
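For illustration, a minimal sketch of that approach (the file name is hypothetical; assumes the top-level JSON value is an array):

const fs = require('fs');
const { parser } = require('stream-json');
const { streamArray } = require('stream-json/streamers/StreamArray');

// Parse the file incrementally instead of loading it all at once.
const stream = fs.createReadStream('huge.json')
  .pipe(parser())
  .pipe(streamArray());

stream.on('data', ({ key, value }) => {
  // key is the array index, value is the fully parsed element
  console.log(key, value);
});
stream.on('end', () => console.log('done'));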
Note: I'll check and let you know.
Edit for the reader: so far it seems to be some kind of race condition plus OS caching; see the discussion in the comments.

How to upload and convert an XLSX file to JSON using ember.js

I am trying to allow the user to upload an XLSX file, convert it to JSON or CSV, and parse it on the back-end. I am using Node.js and have tried several packages, including read-excel-file
(https://github.com/catamphetamine/read-excel-file/blob/master/README.md)
readXlsxFile(file).then(function (rows) {
  // read-excel-file resolves with the parsed rows (an array of row
  // arrays), so there is no JSON string here to pass to JSON.parse.
  // ... do something with rows ...
});
The usual place to look for add-ons would be Ember Observer, but the options available seem to have a status of Work In Progress - they might still be a useful place to look for inspiration on how to proceed.
There are plenty of options on npm. You can import one of those into your project using the new add-on ember-auto-import or, if you'd rather do the hard work yourself, the Ember guides provide some guidance on manually importing.
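For example, with ember-auto-import installed, an npm package such as read-excel-file can be imported directly in a component. This is only a sketch: the component and handler names are hypothetical, and the exact syntax depends on your Ember version:

import Component from '@ember/component';
import readXlsxFile from 'read-excel-file';

export default Component.extend({
  actions: {
    // Wire this to the change event of an <input type="file">.
    fileChanged(event) {
      readXlsxFile(event.target.files[0]).then((rows) => {
        // rows is an array of row arrays, ready to convert to JSON
        console.log(JSON.stringify(rows));
      });
    },
  },
});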
You can use js-xlsx. Add it as a bower dependency and add its imports to your ember-cli-build file like this:
app.import('bower_components/js-xlsx/dist/jszip.js');
app.import('bower_components/js-xlsx/dist/xlsx.min.js');
Handle it as the documentation's parsing-workbooks section shows (the handleFile function explains it well).
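The rough shape of that flow looks like this (a sketch rather than the documentation's exact sample; the element id is hypothetical, and XLSX is the global that js-xlsx exposes):

document.getElementById('file-input').addEventListener('change', (event) => {
  const reader = new FileReader();
  reader.onload = () => {
    const workbook = XLSX.read(new Uint8Array(reader.result), { type: 'array' });
    const firstSheet = workbook.Sheets[workbook.SheetNames[0]];
    // Convert the sheet to an array of row objects keyed by column header.
    console.log(XLSX.utils.sheet_to_json(firstSheet));
  };
  reader.readAsArrayBuffer(event.target.files[0]);
});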

How to get LanguageModel JS File?

I have been using PocketSphinx.js for speech recognition on my website. I have downloaded a language model file (.dmp), but their code uses a JS file for the language model. I don't know where to get the JS file. Help me out. Thanks.
As per the documentation, you can convert the .dmp file to a .js file and load it afterwards; see the section https://github.com/syl22-00/pocketsphinx.js/#i-embedding-the-files-into-one-large-javascript-file. Something like:
$ cmake -DEMSCRIPTEN=1 -DCMAKE_TOOLCHAIN_FILE=path_to_emscripten/cmake/Modules/Platform/Emscripten.cmake -DLM_BASE=/lm/dmp/folder/name -DLM_FILES=model.dmp ..
Also check the alternative section https://github.com/syl22-00/pocketsphinx.js#ii-package-model-files-outside-the-main-javascript, where you can do:
# python .../emscripten/tools/file_packager.py .../pocketsphinx.js/build/pocketsphinx.js --embed model.dmp --js-output=model.dmp.js
Overall, you need smaller models; the big models from the CMUSphinx downloads are too large.

I want to unzip a file with javascript from a file or blob object

I would like to read a .pptx file in JavaScript, so I would like to unzip it and read the content in memory. I don't want to store the file on a server first. I want to choose a file with an <input type="file"> element, take the File object from that input, and read it as binary data or something like that.
I found a lot of libraries that unzip zip files from a URL; I tried to look at their code, but I couldn't figure out how to use them with a blob or byte array.
I can read some stuff like the things described here: http://en.wikipedia.org/wiki/ZIP_%28file_format%29#File_headers
But I don't know how deflate works at the byte or bit level.
(You've said you want to use an input element, so I'm guessing this is browser-based JavaScript.)
Your first step will be to use the File API to read the file as a binary string. See my answer to this other question for an example of that. Then you'll need to find a library. A quick search turned up this one, which implements both inflate and deflate. (I don't have personal experience with it; I just found it in an answer to this other question.)
Naturally this will only work on quite modern browsers that support the File API. Otherwise, you have no option but to push the file to a server and do the work there, since you can't access the content of the file in the browser without the File API.
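To illustrate the two steps, here is a minimal sketch using the File API together with JSZip (my library choice for the example, not necessarily the one linked above; the element id is hypothetical). A .pptx file is just a zip archive of XML parts:

document.getElementById('pptx-input').addEventListener('change', async (event) => {
  const file = event.target.files[0]; // a File is also a Blob
  const zip = await JSZip.loadAsync(file); // unzips entirely in memory
  // Read one of the XML parts inside the .pptx as text.
  const xml = await zip.file('ppt/presentation.xml').async('string');
  console.log(xml.slice(0, 200));
});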
