NodeJS exec with binary from and to the process - javascript

I'm trying to write a function that would use native openssl to do some RSA heavy lifting for me, rather than using a JS RSA library. The goal is to:
Read binary data from a file
Do some processing in the node process, using JS, resulting in a Buffer containing binary data
Write the buffer to the stdin stream of the exec command
RSA encrypt/decrypt the data and write it to the stdout stream
Get the output data back into a Buffer in the JS process for further processing
The child_process module in Node has an exec command, but I fail to see how I can pipe input to the process and pipe it back to my process. Basically I'd like to execute the following type of command, but without having to rely on writing things to files (I didn't check the exact syntax of openssl):
cat the_binary_file.data | openssl -encrypt -inkey key_file.pem -certin > the_output_stream
I could do this by writing a temp file, but I'd like to avoid it if possible. Spawning a child process gives me access to stdin/stdout, but I haven't found this functionality for exec.
Is there a clean way to do this in the way I drafted here? Is there some alternative way of using openssl for this, e.g. some native bindings for the openssl library, that would allow me to do this without relying on the command line?

You've mentioned spawn but seem to think you can't use it. Possibly showing my ignorance here, but it seems like it should be just what you're looking for: Launch openssl via spawn, then write to child.stdin and read from child.stdout. Something very roughly like this completely untested code:
var util = require('util'),
    spawn = require('child_process').spawn;

function sslencrypt(buffer_to_encrypt, callback) {
    var ssl = spawn('openssl', ['-encrypt', '-inkey', 'key_file.pem', '-certin']),
        result = new Buffer(SOME_APPROPRIATE_SIZE),
        resultSize = 0;

    ssl.stdout.on('data', function (data) {
        // Save up the result (or perhaps just call the callback repeatedly
        // with it as it comes, whatever)
        if (data.length + resultSize > result.length) {
            // Too much data, our SOME_APPROPRIATE_SIZE above wasn't big enough
        }
        else {
            // Append to our buffer at the current offset
            data.copy(result, resultSize);
            resultSize += data.length;
        }
    });
    ssl.stderr.on('data', function (data) {
        // Handle error output
    });
    ssl.on('exit', function (code) {
        // Done, trigger your callback (perhaps check `code` here)
        callback(result, resultSize);
    });
    // Write the buffer and close stdin so openssl sees EOF
    ssl.stdin.write(buffer_to_encrypt);
    ssl.stdin.end();
}
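A minimal sketch of an alternative (not from the original answer) that avoids guessing SOME_APPROPRIATE_SIZE up front: collect the stdout chunks in an array and Buffer.concat them on exit. The openssl arguments are placeholders, as above:

function sslencrypt(buffer_to_encrypt, callback) {
    var ssl = spawn('openssl', ['-encrypt', '-inkey', 'key_file.pem', '-certin']),
        chunks = [];
    ssl.stdout.on('data', function (data) {
        chunks.push(data); // accumulate, concatenate once at the end
    });
    ssl.on('exit', function (code) {
        callback(Buffer.concat(chunks));
    });
    // end() both writes the buffer and signals EOF to the child
    ssl.stdin.end(buffer_to_encrypt);
}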

You should be able to set the encoding to binary when you make a call to exec, like..
exec("openssl output_something_in_binary", {encoding: 'binary'}, function(err, stdout, stderr) {
    // do something with stdout - which is in the binary format
});
If you want to write out the content of stdout in binary, make sure to set the encoding to binary again, like..
fs.writeFile("out.bin", stdout, {encoding: 'binary'});
I hope this helps!

Related

How to read from one stream and write to several at once?

Suppose I have a readable stream, e.g. request(URL), and I want to write its response to disk via fs.createWriteStream() by piping from the request. But at the same time I want to calculate a checksum of the downloaded data via a crypto.createHash() stream.
readable -+-> calc checksum
|
+-> write to disk
And I want to do it on the fly, without buffering the entire response in memory.
It seems that I can implement it using the old-school on('data') hook. Pseudocode below:
const hashStream = crypto.createHash('sha256');
hashStream.on('error', cleanup);

const dst = fs.createWriteStream('...');
dst.on('error', cleanup);

request(...).on('data', (chunk) => {
    hashStream.write(chunk);
    dst.write(chunk);
}).on('end', () => {
    hashStream.end();
    const checksum = hashStream.read();
    if (checksum != '...') {
        cleanup();
    } else {
        dst.end();
    }
}).on('error', cleanup);

function cleanup() { /* cancel streams, erase file */ }
But such an approach looks pretty awkward. I tried to use stream.Transform or stream.Writable to implement something like read | calc + echo | write, but I'm stuck with the implementation.
Node.js readable streams have a .pipe method which works pretty much like the unix pipe operator, except that you can stream JS objects as well as strings and Buffers.
Here's a link to the doc on pipe
An example of the use in your case could be something like:
const req = request(...);
req.pipe(dst);
req.pipe(hashStream);
Note that you still have to handle errors per stream, as they're not propagated and the destinations are not closed if the readable errors.
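As an aside (not from the original answer): since Node 10 the stream.pipeline helper does propagate errors and tears the streams down for you; with two destinations you need one pipeline per branch. A rough sketch, reusing the names from the question:

const { pipeline } = require('stream');

const src = request(...); // the readable from the question
pipeline(src, dst, (err) => { if (err) cleanup(); });
pipeline(src, hashStream, (err) => { if (err) cleanup(); });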

IPC on Terminal Server with C/C++ and nw.js/node.js/node-native-module (C++)?

I have a Win32 DLL (C++) which is loaded as a plugin in another application. The DLL starts a nw.js instance (ShellExecuteEx and SEE_MASK_NOCLOSEPROCESS) and ends it at DLL unloading (via the hInstance from ShellExecuteEx). I need a way to send a string (plain ANSI) to the nw process and retrieve an answer (also a string). The old way was a simple HTTP request with the response in the body. But the environment changed during development: the app-dll-nw "package" runs multiple times for the same user, and multiple users run on the same machine (terminal server). So listening on a port is "impossible" (yeah, random ports or a singleton nw, but no).
I found different ways:
socket - port listening problem
wm_copydata/wm_... - needs a custom nw plugin with a hidden window (no native nw way); no request-response system
RPC - port listening problem
DDE - no native JavaScript way (found a module which uses .NET); in my old Delphi days DDE was not a simple task, and it failed multiple times with no logic
shared memory - no experience; expectations: asynchronous, trigger?, no native JavaScript way
shared file - no experience; expectations: asynchronous, trigger (watcher on file change) but problems with synchronization; a native JS way is possible
named pipe - no experience; expectations: Win32 API and like a chat system (in-pipe [send broadcast] and out-pipe [receive broadcast], or both in one)? If yes, I can use one name for all instances, use unique identifiers, and wait for the right answer
What is a nice and simple way to communicate like the HTTP way, but without networking?
Update 1: The Node module "net" is able to create a server for a named pipe. The first test, sending a string from the DLL to nw, was successful.
var server = net.createServer(function(stream) {
    stream.on('data', function(c) {
        console.log('data:', c.toString());
    });
    stream.on('end', function() {
        //server.close();
    });
});
server.listen('\\\\.\\pipe\\MyAppDynamicGUID');
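For testing the pipe from Node itself (not part of the original update), a client can connect to the same pipe name with net.connect - a minimal sketch:

var client = net.connect('\\\\.\\pipe\\MyAppDynamicGUID', function() {
    client.write('ping');
});
client.on('data', function(c) {
    console.log('reply:', c.toString());
    client.end();
});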
Update 2 - My Solution
With a named pipe and a simplified version of https://msdn.microsoft.com/en-us/library/windows/desktop/aa365592(v=vs.85).aspx I found a working method.
Server in nw.js:
var server = net.createServer(function(req) {
    req.on('data', function(c) {
        console.log(c.toString());
        req.write('123|Hello World', 'ascii');
    });
});
server.listen('\\\\.\\pipe\\MyAppDynamicGUID');
The client in C++ (no permanent connection, strange string handling, simplified error handling):
static std::string PipenameA = "\\\\.\\pipe\\MyAppDynamicGUID";
#define BUFSIZE 512

std::string SendPipeRequestA(std::string sRequest) {
    DWORD dwToWrite, dwWritten, dwRead;
    BOOL bSuccess;
    char chBuf[BUFSIZE];
    std::vector<char> buffer;

    HANDLE hPipe = CreateFileA(PipenameA.c_str(), GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);
    if (hPipe == INVALID_HANDLE_VALUE)
        return "-1|Pipe-Error 1 (connect)";
    dwToWrite = (lstrlenA(sRequest.c_str()) + 1) * sizeof(char);
    bSuccess = WriteFile(hPipe, sRequest.c_str(), dwToWrite, &dwWritten, NULL);
    if (!bSuccess) {
        CloseHandle(hPipe);
        return "-1|Pipe-Error 2 (write)";
    }
    do {
        bSuccess = ReadFile(hPipe, chBuf, BUFSIZE * sizeof(char), &dwRead, NULL);
        if (!bSuccess && GetLastError() != ERROR_MORE_DATA)
            break;
        buffer.insert(buffer.end(), chBuf, chBuf + dwRead);
    } while (!bSuccess);
    // The response is not necessarily null-terminated, so build the string from the vector's range
    std::string sResponse(buffer.begin(), buffer.end());
    CloseHandle(hPipe);
    return sResponse;
}
// Jonny
// Jonny
The answers you will get will be opinion-based, be aware of that.
You can inject the data into the JS module as a command-line argument, for example
start nw.js MyData
and read it inside the JS with process.argv.
Now, sending the data back to the C++ executable/DLL is a bit tricky.
If you shell-execute the process, you have a handle to it.
You can print the data to stdout from the JS part, and read it in the native app by getting the STDOUT handle from the process handle.
Registering your nw.js app with a custom URL scheme should be an elegant way, such as "github://", "thunder://", "twitter://".
On Windows you may have a look at:
https://msdn.microsoft.com/en-us/library/aa767914(v=vs.85).aspx
With a custom URL you can pass simple arguments to nw.js in single-instance mode. See:
https://github.com/nwjs/nw.js/wiki/Handling-files-and-arguments#open-file-with-existing-app
If more data is required, base64 can help, or even LZ-String compression; see the sketch below.
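A minimal sketch of the base64 idea (the argument handling is illustrative, not from the original answer): the caller base64-encodes the payload into the argument, and the JS side decodes it from process.argv:

// decode a base64 payload passed as a command-line argument
var raw = process.argv[2] || '';
var payload = Buffer.from(raw, 'base64').toString('utf8');
console.log('received:', payload);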

Node.js Stream and child process - Strange Behavior

I have a C program which reads integer values from stdin. I wrote a Node.js program to execute the C file, and the Node.js program reads a text file (containing numbers on multiple lines) and pipes this data to the stdin of the child process.
The problem is, if the number of inputs in the txt file is less than the expected number, the child process is supplied with the value 0. I want the child process to wait until the data is received.
C program
#include <stdio.h>

int main() {
    int a;
    printf("hekllowworls");
    scanf("%d", &a);
    printf("%d", a);
    scanf("%d", &a);
    printf("%d", a);
    scanf("%d", &a);
    printf("%d", a);
}
Node JS Program
var fs = require('fs'),
    cp = require('child_process'),
    stream = fs.createReadStream('myfile.txt');

var obj = cp.spawn('/home/arju/Desktop/exec/a.out');
stream.pipe(obj.stdin);

obj.stdout.on('data', function(data) {
    console.log(data.toString());
});
obj.on('error', function(err) {
    console.log(err);
});
Text file - myfile.txt
10
20
According to man scanf:
The value EOF is returned if the end of input is reached before either the first successful conversion or a matching failure occurs. EOF is also returned if a read error occurs, in which case the error indicator for the stream (see ferror(3)) is set, and errno is set to indicate the error.
I feel that you should use something like this:
r = scanf("%d", &a);
if (r != EOF) { ...
Logically, a simple wait for data could look like this:
while ((r = scanf("%d", &a)) != 0) {
    if (r == EOF) continue;
    printf("%d", a);
}
EDIT
As I understand it, you should use unbuffered I/O operations, like the read/write syscalls. Try this:
#include <unistd.h>

int main() {
    int n;
    char buf[255];
    while (1) {
        while ((n = read(0, buf, sizeof(buf))) != 0) {
            write(1, buf, n);
        }
    }
}
Get the info from: write() to stdout and printf output not interleaved? - good answer, check it out.
Your C program buffers its output: in my first example the data sat waiting to fill the output buffer, so I had no output.
EDIT 2: How to turn the buffering off
With small modifications to the code, you can use fflush(stdout) after each printf call.
If you can't edit the code, take a look at this answer from unix.stackexchange: https://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe - there are two good solutions (I'd prefer the second, because it is from coreutils).
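If you go the coreutils route, the Node side of the question could wrap the child in stdbuf to force unbuffered stdout - a sketch, assuming stdbuf is installed:

var cp = require('child_process');
// stdbuf -o0 disables stdout buffering in the wrapped program
var obj = cp.spawn('stdbuf', ['-o0', '/home/arju/Desktop/exec/a.out']);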

Stream file uploaded with Express.js through gm to eliminate double write

I'm using Express.js and have a route to upload images that I then need to resize. Currently I just let Express write the file to disk (which I think uses node-formidable under the covers) and then resize using gm (http://aheckmann.github.com/gm/) which writes a second version to disk.
gm(path)
.resize(540,404)
.write(dest, function (err) { ... });
I've read that you can get a hold of the node-formidable file stream before it writes it to disk, and since gm can accept a stream instead of just a path, I should be able to pass this right through eliminating the double write to disk.
I think I need to override form.onPart but I'm not sure where (should it be done as Express middleware?) and I'm not sure how to get a hold of form or what exactly to do with the part. This is the code skeleton that I've seen in a few places:
form.onPart = function(part) {
    if (!part.filename) { form.handlePart(part); return; }
    part.on('data', function(buffer) {
    });
    part.on('end', function() {
    });
};
Can somebody help me put these two pieces together? Thanks!
You're on the right track by rewriting form.onPart. Formidable writes to disk by default, so you want to act before it does.
Parts themselves are Streams, so you can pipe them to whatever you want, including gm. I haven't tested it, but this makes sense based on the documentation:
var formidable = require('formidable'),
    fs = require('fs'),
    gm = require('gm');

var form = new formidable.IncomingForm;
form.onPart = function (part) {
    if (!part.filename) return this.handlePart(part);
    gm(part).resize(200, 200).stream(function (err, stdout, stderr) {
        stdout.pipe(fs.createWriteStream('my/new/path/to/img.png'));
    });
};
As for the middleware, I'd copy-paste the multipart middleware from Connect/Express and add the onPart function to it: http://www.senchalabs.org/connect/multipart.html
It'd be a lot nicer if formidable didn't write to disk by default, or if it took a flag, wouldn't it? You could send them an issue.
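For completeness, a rough, untested sketch of how the pieces might fit into a plain route handler instead of middleware (the route path and output filename are illustrative):

app.post('/upload', function (req, res, next) {
    var form = new formidable.IncomingForm();
    form.onPart = function (part) {
        if (!part.filename) return this.handlePart(part);
        gm(part).resize(540, 404).stream(function (err, stdout, stderr) {
            if (err) return next(err);
            stdout.pipe(fs.createWriteStream('uploads/resized.png'));
        });
    };
    form.parse(req, function (err) {
        if (err) return next(err);
        res.send('resized');
    });
});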

Unzipping files

I want to display OpenOffice files, .odt and .odp, at the client side using a web browser.
These files are zipped. Using Ajax I can fetch them from the server, but they arrive zipped. I have to unzip them using JavaScript; I have tried inflate.js (http://www.onicos.com/staff/iz/amuse/javascript/expert/inflate.txt), but without success.
How can I do this?
I wrote an unzipper in Javascript. It works.
It relies on Andy G.P. Na's binary file reader and some RFC1951 inflate logic from notmasteryet. I added the ZipFile class.
working example:
http://cheeso.members.winisp.net/Unzip-Example.htm (dead link)
The source:
http://cheeso.members.winisp.net/srcview.aspx?dir=js-unzip (dead link)
NB: the links are dead; I'll find a new host soon.
Included in the source is a ZipFile.htm demonstration page, and 3 distinct scripts, one for the zipfile class, one for the inflate class, and one for a binary file reader class. The demo also depends on jQuery and jQuery UI. If you just download the js-zip.zip file, all of the necessary source is there.
Here's what the application code looks like in Javascript:
// In my demo, this gets attached to a click event.
// It instantiates a ZipFile, and provides a callback that is
// invoked when the zip is read. This can take a few seconds on a
// large zip file, so it's asynchronous.
var readFile = function() {
    $("#status").html("<br/>");
    var url = $("#urlToLoad").val();
    var doneReading = function(zip) {
        extractEntries(zip);
    };
    var zipFile = new ZipFile(url, doneReading);
};

// this function extracts the entries from an instantiated zip
function extractEntries(zip) {
    $('#report').accordion('destroy');
    // clear
    $("#report").html('');
    var extractCb = function(id) {
        // this callback is invoked with the entry name, and entry text
        // in my demo, the text is just injected into an accordion panel.
        return (function(entryName, entryText) {
            var content = entryText.replace(new RegExp("\\n", "g"), "<br/>");
            $("#" + id).html(content);
            $("#status").append("extract cb, entry(" + entryName + ") id(" + id + ")<br/>");
            $('#report').accordion('destroy');
            $('#report').accordion({collapsible: true, active: false});
        });
    };
    // for each entry in the zip, extract it.
    for (var i = 0; i < zip.entries.length; i++) {
        var entry = zip.entries[i];
        var entryInfo = "<h4><a>" + entry.name + "</a></h4>\n<div>";
        // contrive an id for the entry, make it unique
        var randomId = "id-" + Math.floor((Math.random() * 1000000000));
        entryInfo += "<span class='inputDiv'><h4>Content:</h4><span id='" + randomId +
            "'></span></span></div>\n";
        // insert the info for one entry as the last child within the report div
        $("#report").append(entryInfo);
        // extract asynchronously
        entry.extract(extractCb(randomId));
    }
}
The demo works in a couple of steps: The readFile fn is triggered by a click, and instantiates a ZipFile object, which reads the zip file. There's an asynchronous callback for when the read completes (usually in less than a second for reasonably sized zips) - in this demo the callback is held in the doneReading local variable, which simply calls extractEntries, which blindly unzips all the content of the provided zip file. In a real app you would probably choose some of the entries to extract (allow the user to select, or choose one or more entries programmatically, etc.).
The extractEntries fn iterates over all entries, and calls extract() on each one, passing a callback. Decompression of an entry takes time, maybe 1s or more per entry in the zipfile, which means asynchrony is appropriate. The extract callback simply adds the extracted content to a jQuery accordion on the page. If the content is binary, then it gets formatted as such (not shown).
It works, but I think that the utility is somewhat limited.
For one thing: It's very slow. It takes ~4 seconds to unzip the 140k AppNote.txt file from PKWare. The same decompression can be done in less than 0.5s in a .NET program. EDIT: The JavaScript ZipFile unpacks considerably faster than this now, in IE9 and in Chrome. It is still slower than a compiled program, but it is plenty fast for normal browser usage.
For another: it does not do streaming. It basically slurps in the entire contents of the zipfile into memory. In a "real" programming environment you could read in only the metadata of a zip file (say, 64 bytes per entry) and then read and decompress the other data as desired. There's no way to do IO like that in javascript, as far as I know, therefore the only option is to read the entire zip into memory and do random access in it. This means it will place unreasonable demands on system memory for large zip files. Not so much a problem for a smaller zip file.
Also: It doesn't handle the "general case" zip file - there are lots of zip options that I didn't bother to implement in the unzipper - like ZIP encryption, WinZip encryption, zip64, UTF-8 encoded filenames, and so on. (EDIT - it handles UTF-8 encoded filenames now.) The ZipFile class handles the basics, though. Some of these things would not be hard to implement. I have an AES encryption class in JavaScript; that could be integrated to support encryption. Supporting Zip64 would probably be useless for most users of JavaScript, as it is intended to support >4GB zipfiles - no need to extract those in a browser.
I also did not test the case for unzipping binary content. Right now it unzips text. If you have a zipped binary file, you'd need to edit the ZipFile class to handle it properly. I didn't figure out how to do that cleanly. It does binary files now, too.
EDIT - I updated the JS unzip library and demo. It now does binary files, in addition to text. I've made it more resilient and more general - you can now specify the encoding to use when reading text files. Also the demo is expanded - it shows unzipping an XLSX file in the browser, among other things.
So, while I think it is of limited utility and interest, it works. I guess it would work in Node.js.
I'm using zip.js and it seems to be quite useful. It's worth a look!
Check the Unzip demo, for example.
I found jszip quite useful. I've used it so far only for reading, but it has create/edit capabilities as well.
Code-wise it looks something like this:
var new_zip = new JSZip();
new_zip.load(file);
new_zip.files["doc.xml"].asText() // this gives you the text in the file
One thing I noticed is that the file has to be in binary stream format (read using FileReader's .readAsArrayBuffer); otherwise I was getting errors saying I might have a corrupt zip file.
Edit: Note from the 2.x to 3.0.0 upgrade guide:
The load() method and the constructor with data (new JSZip(data)) have
been replaced by loadAsync().
Thanks user2677034
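For reference, a minimal 3.x equivalent of the snippet above (untested sketch, assuming the same ArrayBuffer input):

JSZip.loadAsync(file).then(function (zip) {
    return zip.file("doc.xml").async("text"); // 3.x replacement for asText()
}).then(function (text) {
    console.log(text);
});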
If you need to support other formats as well, or just need good performance, you can use this WebAssembly library. It's promise-based, it uses Web Workers for threading, and the API is a simple ES module.
How to use
Install with npm i libarchive.js and use it as an ES module.
The library consists of two parts: an ES module and a webworker bundle. The ES module part is your interface to the library; use it like any other module. The webworker bundle lives in the libarchive.js/dist folder, so you need to make sure it is available in your public folder, since it will not get bundled if you're using a bundler (it's all bundled up already), and specify the correct path to the Archive.init() method.
import {Archive} from 'libarchive.js/main.js';

Archive.init({
    workerUrl: 'libarchive.js/dist/worker-bundle.js'
});

document.getElementById('file').addEventListener('change', async (e) => {
    const file = e.currentTarget.files[0];
    const archive = await Archive.open(file);
    let obj = await archive.extractFiles();
    console.log(obj);
});

// outputs
{
    ".gitignore": {File},
    "addon": {
        "addon.py": {File},
        "addon.xml": {File}
    },
    "README.md": {File}
}
I wrote "Binary Tools for JavaScript", an open source project that includes the ability to unzip, unrar and untar: https://github.com/codedread/bitjs
Used in my comic book reader: https://github.com/codedread/kthoom (also open source).
HTH!
If anyone's reading images or other binary files from a zip file hosted at a remote server, you can use the following snippet to download and create the zip object using the jszip library.
// this function just gets the public URL of the zip file
let url = await getStorageUrl(path)
console.log('public url is', url)

let zipObj
// get the zip file to the client
axios.get(url, { responseType: 'arraybuffer' }).then((res) => {
    console.log('zip download status ', res.status)
    // load contents into jszip and create an object
    jszip.loadAsync(new Blob([res.data], { type: 'application/zip' })).then((zip) => {
        zipObj = zip
        $.each(zip.files, function (index, zipEntry) {
            console.log('filename', zipEntry.name)
        })
    })
})
Now using the zipObj you can access the files and create a src URL for them:
var fname = 'myImage.jpg'
zipObj.file(fname).async('blob').then((blob) => {
    var blobUrl = URL.createObjectURL(blob)
    // e.g. assign blobUrl to an <img> element's src
})
