Nodejs child_process execute shell command - javascript

I am working on a university project where I have to evaluate the security threats to an open WiFi network. I have chosen the aircrack-ng set of tools for penetration testing. My project uses Node.js for its rich set of features. However, I am a beginner and am struggling to solve a problem. Firstly, I shall present my code and then pose the problem.
var spawn = require('child_process').spawn;
var nic = "wlan2";

// obtain uid number of a user for spawning a new console command
// var uidNumber = require("uid-number");
// uidNumber("su", function (er, uid, gid) {
//     console.log(uid);
// });

// check for monitor tools
var airmon_ng = spawn('airmon-ng');
airmon_ng.stdout.on('data', function (data) {
    nicList = data.toString().split("\n");
    // use for data binding
    console.log(nicList[0]); //.split("\t")[0]);
});

// airmon-ng start at the nic (var)
var airmon_ng_start = spawn('airmon-ng', ['start', nic])
    .on('error', function (err) { console.log(err); });
airmon_ng_start.stdout.on('data', function (data) {
    console.log(data.toString());
});

var airodump_ng = spawn('airodump-ng', ['mon0'])
    .on('error', function (err) { console.log(err); });
airodump_ng.stdout.on('data', function (data) {
    console.log(data.toString());
});
As seen in the above code, I use child_process.spawn to execute the shell command. In the line `var airmon_ng_start = spawn(...)`, the actual command runs in the terminal and doesn't end until Ctrl+C is hit; it continually updates the list of Wi-Fi networks available in the vicinity. My goal is to identify the network that I wish to test for vulnerability. However, when I execute the command, the process waits indefinitely for the shell command to terminate (which never terminates until killed). Moreover, I wish to use the stdout stream to display each new set of data as the scan finds and updates networks. Could the Node.js experts suggest a better way to do this?
2) Also, I wish to execute some commands as root. How may this be done? For now I am running the JavaScript as root. However, in the project I wish to execute only some of the commands as root, not the entire JS file. Any suggestions?

// inherit parent's stdio streams
var airmon_ng_start = spawn('airodump-ng', ['mon0'], { stdio: 'inherit' })
    .on('error', function (err) { console.log(err); });
Found this solution: simply inherit the parent's stdio.

stream stdout causes RAM usage to increase dramatically

I use spawn to run a command that runs constantly (it is not supposed to stop) and transmits data to its output. The problem is that the RAM usage of the Node app increases constantly.
After multiple tests, I narrowed it down to the following piece of code, which reproduces the problem even though the handlers are almost empty:
const runCommand = () => {
    const command = 'FFMPEG COMMAND HERE';
    let ffmpeg = spawn(command, [], { shell: true });
    ffmpeg.on('exit', function (code) { code = null; });
    ffmpeg.stderr.on('data', function (data) { data = null; });
    ffmpeg.stdout.on('data', function (data) { data = null; });
};
I get the same problem with the following:
const runCommand = () => {
    const command = 'FFMPEG COMMAND HERE';
    let ffmpeg = spawn(command, [], { shell: true });
    ffmpeg.on('exit', function (code) { code = null; });
    ffmpeg.stderr.on('data', function (data) { data = null; });
    ffmpeg.on('spawn', function () {
        ffmpeg.stdout.pipe(fs.createWriteStream('/dev/null'));
    });
};
The important part is that when I delete `function (data) {}` from `ffmpeg.stdout.on('data', function (data) {});`, the problem goes away. The type of the received data is a Buffer object. I think the problem lies in that part.
The problem also appears when spawn pipes out the data to another writable (even to /dev/null).
UPDATE: After hours of research, I found out that it's related to the spawn output and stream backpressure. I configured the FFmpeg command to send chunks less frequently, which mitigated the problem (memory grows more slowly than before), but usage is still increasing.
If you delete the ffmpeg.stdout.on('data', function (data) {}); line, the problem fades away, but only partially: ffmpeg keeps writing to stdout and may eventually stop, waiting for the stdout to be consumed. MongoDB, for example, has this "pause until stdout is empty" logic.
If you are not going to process the stdout, just ignore it with this:
const runCommand = () => {
    const command = 'FFMPEG COMMAND HERE';
    let ffmpeg = spawn(command, [], { shell: true, stdio: "ignore" });
    ffmpeg.on('exit', function (code) { code = null; });
};
This makes the spawned process discard its stdout and stderr, so there is nothing to consume. It is the correct way if you don't need the output, as you shouldn't waste CPU cycles and resources reading a buffer that you are going to discard. Take into account that although you just add a one-liner to read and discard the data, libuv (the Node.js I/O manager, among other things) does more complex work to read it.
Still, I'm pretty sure that you are facing this bug: https://github.com/Unitech/pm2/issues/5145
It also seems that if you output too much logs, pm2 can't handle writing them to the output files as fast as needed, so reducing the log output can fix the problem: https://github.com/Unitech/pm2/issues/1126#issuecomment-996626921
As you mentioned you need the stdout output, stdio: "ignore" is not an option.
Depending on what you're doing with the data you're receiving, you may receive more data than you can handle, so buffers build up and fill your memory.
A possible solution is to pause and resume the stream when too much data builds up.
ffmpeg.stdout.on('data', function (data) {
    ffmpeg.stdout.pause();
    doSomethingWithDataAsyncWhichTakesAWhile(data).finally(() => ffmpeg.stdout.resume());
});
When to pause and resume the stream depends heavily on how you handle the data.
Using it in combination with a writable (which, if I'm not mistaken, you're doing):
ffmpeg.stdout.on('data', function (data) {
    if (!writeable.write(data)) {
        /* We need to wait for the 'drain' event. */
        ffmpeg.stdout.pause();
        writeable.once('drain', () => ffmpeg.stdout.resume());
    }
});
writeable.write(...) returns false if the stream wants the calling code to wait for the 'drain' event to be emitted before writing additional data; otherwise true (source).
If you ignore this, you'll end up building up buffers in memory.
This might be the cause of your problem.
PS, as a side note: at least on Unix systems, when the output buffer of stdout is full and not being read (e.g. because the stream is paused), the application writing to stdout will block until there is space to write into. In the case of ffmpeg this is not an issue and is intended behaviour, but it is something to be mindful of.

How to efficiently stream a real-time chart from a local data file

Complete noob picking up Node.js over the last few days here, and I've gotten myself in big trouble, it looks like. I currently have a working Node.js + Express server instance, running on a Raspberry Pi, acting as a web interface for a local data acquisition script ("the DAQ"). When executed, the script writes data to a local file on the Pi in .csv format, in real time, every second.
My Node app is a simple web interface to start (on-click) the data acquisition script, as well as to plot previously acquired data logs, and visualize the actively being collected data in real time. Plotting of old logs was simple, and I wrote a JS function (using Plotly + d3) to read a local csv file via AJAX call, and plot it - using this script as a starting point, but using the logs served by express rather than an external file.
When I went to translate this into a real-time plot, I started out using the setInterval() method to update the graph periodically, based on other examples. After dealing with a few unwanted recursion issues, and adjusting the interval to a more reasonable setting, I eliminated the memory/traffic issues which were crashing the browser after a minute or two, and things are mostly stable.
However, I need help with one thing primarily:
Improving the efficiency of my first-attempt approach: the acquisition script absolutely needs to write to file every second, but considering that a typical run might last 1-2 weeks, the file being requested on every interval loop will quickly balloon in size. I'm completely new to Node/Express, so I'm sure there's a much better way of doing the real-time rendering; that's the real issue here. Any pointers in a better direction would be massively helpful!
Right now, the killDAQ() call issued by the "Stop" button kills the underlying Python process writing the data to disk. Is there a way to hook into that same button click to also terminate the setInterval() loop updating the graph? There's no need for it to keep updating after the data acquisition has stopped, so having a single click do double duty would be ideal. I think that setting up a listener or a req/res approach would be an option, but pointers in the right direction would be massively helpful.
(Edit: I solved #2, using global window. variables. It's a hack, but it seems to work:
window.refreshIntervalId = setInterval(foo);
...
clearInterval(window.refreshIntervalId);
)
Thanks so much for the help!
MWE:
html (using Pug as a template engine):
doctype html
html
  body.default
    .container-fluid
      .row
        .col-md-5
          .row.text-center
            .col-md-6
              button#start_button(type="button", onclick="makeCallToDAQ()") Start Acquisition
            .col-md-6
              button#stop_button(type="button", onclick="killDAQ()") Stop Acquisition
        .col-md-7
          #myDAQDiv(style='width: 980px; height: 500px;')
javascript (start/stop acquisition):
function makeCallToDAQ() {
    fetch('/start_daq', {
        // call to app to start the acquisition script
    })
    .then(console.log(dateTime))
    .then(function(response) {
        console.log(response)
        setInterval(function() { callPlotly(dateTime.concat('.csv')); }, 5000);
    });
}
function killDAQ() {
    fetch('/stop_daq')
        // kills the process
        .then(function(response) {
            // Use the response sent here
            alert('DAQ has stopped!')
        })
}
javascript (call to Plotly for plotting):
function callPlotly(filename) {
    var csv_filename = filename;
    console.log(csv_filename)
    function makeplot(csv_filename) {
        // Read data via AJAX call and grab header names
        var headerNames = [];
        d3.csv(csv_filename, function(error, data) {
            headerNames = d3.keys(data[0]);
            processData(data, headerNames)
        });
    };
    function processData(allRows, headerNames) {
        // Plot data from relevant columns
        var plotDiv = document.getElementById("plot");
        var traces = [{
            x: x,
            y: y
        }];
        Plotly.newPlot('myDAQDiv', traces, plotting_options);
    };
    makeplot(filename);
}
node.js (the actual Node app):
// Start the DAQ
app.use(express.json());
var isDaqRunning = true;
var pythonPID = 0;
const { spawn } = require('child_process')
var process;
app.post('/start_daq', function(req, res) {
    isDaqRunning = true;
    // Call the python script here.
    const process = spawn('python', ['../private/BIC_script.py', arg1, arg2])
    pythonPID = process.pid;
    process.stdout.on('data', (myData) => {
        res.send("Done!")
    })
    process.stderr.on('data', (myErr) => {
        // If anything gets written to stderr, it'll be in the myErr variable
    })
    res.status(200).send(); //.json(result);
})
// Stop the DAQ
app.get('/stop_daq', function(req, res) {
    isDaqRunning = false;
    process.on('close', (code, signal) => {
        console.log(
            `child process terminated due to receipt of signal ${signal}`);
    });
    // Send SIGTERM to process
    process.kill('SIGTERM');
    res.status(200).send();
})

NodeJS child_process stdout, if process is waiting for stdin

I'm working on an application that allows compiling and executing code submitted over an API.
The binary I want to execute is saved as input_c; it should print a text asking the user for his name, and print another text after the input is received.
It works correctly using the code below: first text, then input (on the terminal), then the second text.
const {spawn} = require('child_process');
let cmd = spawn('input_c', [], {stdio: [process.stdin, process.stdout, process.stderr]});
Output:
$ node test.js
Hello, what is your name? Heinz
Hi Heinz, nice to meet you!
I would like to handle stdout, stderr and stdin separately and not write them to the terminal. The following code was my attempt to achieve the same behaviour as above:
const {spawn} = require('child_process');
let cmd = spawn('input_c');
cmd.stdout.on('data', data => {
    console.log(data.toString());
});
cmd.stderr.on('data', data => {
    console.log(data.toString());
});
cmd.on('error', data => {
    console.log(data.toString());
});
// simulating user input
setTimeout(function() {
    console.log('Heinz');
    cmd.stdin.write('Heinz\n');
}, 3000);
Output:
$ node test.js
Heinz
Hello, what is your name? Hi Heinz, nice to meet you!
To simulate user input, I write to stdin after 3000 ms. But here I do not receive the first data on stdout directly on run; the process seems to wait for stdin and then outputs everything at once.
How can I achieve the same behaviour for my second case?
The following C code was used to build the binary, but any application waiting for user input can be used:
#include <stdio.h>

int main() {
    char name[32];
    printf("Hello, what is your name? ");
    scanf("%s", name);
    printf("Hi %s, nice to meet you!", name);
    return 0;
}
node-pty can be used here to prevent the buffered output of the child process.
const pty = require('node-pty');
let cmd = pty.spawn('./input_c');
cmd.on('data', data => {
    console.log(data.toString());
});
// simulating user input
setTimeout(function() {
    console.log('Heinz');
    cmd.write('Heinz\n');
}, 3000);
output:
Hello, what is your name?
Heinz
Heinz
Hi Heinz, nice to meet you!
The problem you're facing, with stdout 'data' events not being triggered right after spawn(), arises from how Node.js spawns the child process. By default it creates a pipe (stream.pipe()), which lets the child process buffer its output before sending it to the stdout it was given by Node; this is, in general, good for performance.
But, since you want real-time output and also you're in charge of the binary, you might simply disable internal buffering. In C you can achieve that by adding setbuf(stdout, NULL); to the beginning of your program:
#include <stdio.h>

int main() {
    setbuf(stdout, NULL);
    char name[32];
    printf("Hello, what is your name? ");
    scanf("%31s", name);
    printf("Hi %s, nice to meet you!", name);
    return 0;
}
Alternatively, you can call fflush(stdout); after each printf(), puts(), etc:
#include <stdio.h>

int main() {
    char name[32];
    printf("Hello, what is your name? "); fflush(stdout);
    scanf("%31s", name);
    printf("Hi %s, nice to meet you!", name); fflush(stdout);
    return 0;
}
Upon disabling internal buffering or triggering explicit flushes in the child process, you will immediately get the behavior you expect, without any external dependencies.
UPD:
Many applications intentionally suppress, or at least allow suppressing, stdio buffering, so look for related startup arguments. For example, you can launch the Python interpreter binary with the -u option, which forces stdin, stdout and stderr to be completely unbuffered. There are also several older questions about Node.js and stdio buffering problems that you might find useful, like this one: How can I flush a child process from nodejs

Re-using same instance again webdriverJS

I am really new to Selenium. I managed to open a website using the Node.js code below:
var webdriver = require('selenium-webdriver');
var driver = new webdriver.Builder()
.forBrowser('chrome')
.build();
console.log(driver);
driver.get('https://web.whatsapp.com');
//perform all other operations here.
https://web.whatsapp.com is opened, and I manually scan a QR code and log in. Now I have different JavaScript files to perform actions like delete, clear chat, etc. inside web.whatsapp.com.
Now if I get some error, I debug, and when I run the script again using node test.js, it takes another 2 minutes to load the page and do the steps I need. I just want to reopen the already-opened tab and continue my script instead of a new window opening.
Edit, day 2: Still searching for a solution. I tried the code below to save the object and reuse it. Is this the correct approach? I get a JSON parse error, though.
var o = new chrome.Options();
o.addArguments("user-data-dir=/Users/vishnu/Library/Application Support/Google/Chrome/Profile 2");
o.addArguments("disable-infobars");
o.addArguments("--no-first-run");
var driver = new webdriver.Builder().withCapabilities(webdriver.Capabilities.chrome()).setChromeOptions(o).build();
var savefile = fs.writeFile('data.json', JSON.stringify(util.inspect(driver)) , 'utf-8');
var parsedJSON = require('./data.json');
console.log(parsedJSON);
It took me some time and a couple of different approaches, but I managed to put together something that I think solves your problem and allows you to develop tests in a rather nice way.
Because it does not directly answer the question of how to re-use a browser session in Selenium (using their JavaScript API), I will first present my proposed solution and then briefly discuss the other approaches I tried. It may give someone else an idea and help them to solve this problem in a nicer/better way. Who knows. At least my attempts will be documented.
Proposed solution (tested and works)
Because I did not manage to actually reuse a browser session (see below), I figured I could try something else. The approach will be the following.
Idea
Have a main loop in one file (say init.js) and tests in a separate file (test.js).
The main loop opens a browser instance and keeps it open. It also exposes some sort of CLI that allows one to run tests (from test.js), inspect errors as they occur and to close the browser instance and stop the main loop.
The test in test.js exports a test function that is being executed by the main loop. It is passed a driver instance to work with. Any errors that occur here are being caught by the main loop.
Because the browser instance is opened only once, we have to do the manual process of authenticating with WhatsApp (scanning a QR code) only once. After that, running a test will reload web.whatsapp.com, but it will have remembered that we authenticated and thus immediately be able to run whatever tests we define in test.js.
In order to keep the main loop alive, it is vital that we catch each and every error that might occur in our tests. I unfortunately had to resort to uncaughtException for that.
Implementation
This is the implementation of the above idea I came up with. It is possible to make this much fancier if you would want to do so. I went for simplicity here (hope I managed).
init.js
This is the main loop from the above idea.
var webdriver = require('selenium-webdriver'),
    by = webdriver.By,
    until = webdriver.until,
    driver = null,
    prompt = '> ',
    testPath = 'test.js',
    lastError = null;
function initDriver() {
    return new Promise((resolve, reject) => {
        // already opened a browser? done
        if (driver !== null) {
            resolve();
            return;
        }
        // open a new browser, let user scan QR code
        driver = new webdriver.Builder().forBrowser('chrome').build();
        driver.get('https://web.whatsapp.com');
        process.stdout.write("Please scan the QR code within 30 seconds...\n");
        driver.wait(until.elementLocated(by.className('chat')), 30000)
            .then(() => resolve())
            .catch((timeout) => {
                process.stdout.write("\b\bTimed out waiting for code to" +
                                     " be scanned.\n");
                driver.quit();
                reject();
            });
    });
}
function recordError(err) {
    process.stderr.write(err.name + ': ' + err.message + "\n");
    lastError = err;
    // let user know that test failed
    process.stdout.write("Test failed!\n");
    // indicate we are ready to read the next command
    process.stdout.write(prompt);
}
process.stdout.write(prompt);
process.stdin.setEncoding('utf8');
process.stdin.on('readable', () => {
    var chunk = process.stdin.read();
    if (chunk === null) {
        // happens on initialization, ignore
        return;
    }
    // do various different things for different commands
    var line = chunk.trim(),
        cmds = line.split(/\s+/);
    switch (cmds[0]) {
        case 'error':
            // print last error, when applicable
            if (lastError !== null) {
                console.log(lastError);
            }
            // indicate we are ready to read the next command
            process.stdout.write(prompt);
            break;
        case 'run':
            // open a browser if we didn't yet, execute tests
            initDriver().then(() => {
                // carefully load test code, report SyntaxError when applicable
                var file = (cmds.length === 1 ? testPath : cmds[1] + '.js');
                try {
                    var test = require('./' + file);
                } catch (err) {
                    recordError(err);
                    return;
                } finally {
                    // force node to read the test code again when we
                    // require it in the future
                    delete require.cache[__dirname + '/' + file];
                }
                // carefully execute tests, report errors when applicable
                test.execute(driver, by, until)
                    .then(() => {
                        // indicate we are ready to read the next command
                        process.stdout.write(prompt);
                    })
                    .catch(recordError);
            }).catch(() => process.stdin.destroy());
            break;
        case 'quit':
            // close browser if it was opened and stop this process
            if (driver !== null) {
                driver.quit();
            }
            process.stdin.destroy();
            return;
    }
});
// some errors somehow still escape all catches we have...
process.on('uncaughtException', recordError);
test.js
This is the test from the above idea. I wrote some things just to test the main loop and some WebDriver functionality. Pretty much anything is possible here. I have used promises to make test execution work nicely with the main loop.
var driver, by, until,
    timeout = 5000;

function waitAndClickElement(selector, index = 0) {
    driver.wait(until.elementLocated(by.css(selector)), timeout)
        .then(() => {
            driver.findElements(by.css(selector)).then((els) => {
                var element = els[index];
                driver.wait(until.elementIsVisible(element), timeout);
                element.click();
            });
        });
}
exports.execute = function(d, b, u) {
    // make globally accessible for ease of use
    driver = d;
    by = b;
    until = u;
    // actual test as a promise
    return new Promise((resolve, reject) => {
        // open site
        driver.get('https://web.whatsapp.com');
        // make sure it loads fine
        driver.wait(until.elementLocated(by.className('chat')), timeout);
        driver.wait(until.elementIsVisible(
            driver.findElement(by.className('chat'))), timeout);
        // open menu
        waitAndClickElement('.icon.icon-menu');
        // click profile link
        waitAndClickElement('.menu-shortcut', 1);
        // give profile time to animate
        // this prevents an error from occurring when we try to click the close
        // button while it is still being animated (workaround/hack!)
        driver.sleep(500);
        // close profile
        waitAndClickElement('.btn-close-drawer');
        driver.sleep(500); // same for hiding profile
        // click some chat
        waitAndClickElement('.chat', 3);
        // let main script know we are done successfully
        // we do so after all other webdriver promises have resolved by creating
        // another webdriver promise and hooking into its resolve
        driver.wait(until.elementLocated(by.className('chat')), timeout)
            .then(() => resolve());
    });
};
Example output
Here is some example output. The first invocation of run test will open up an instance of Chrome. Other invocations will use that same instance. When an error occurs, it can be inspected as shown. Executing quit will close the browser instance and quit the main loop.
$ node init.js
> run test
> run test
WebDriverError: unknown error: Element <div class="chat">...</div> is not clickable at point (163, 432). Other element would receive the click: <div dir="auto" contenteditable="false" class="input input-text">...</div>
(Session info: chrome=57.0.2987.133)
(Driver info: chromedriver=2.29.461571 (8a88bbe0775e2a23afda0ceaf2ef7ee74e822cc5),platform=Linux 4.9.0-2-amd64 x86_64)
Test failed!
> error
<prints complete stacktrace>
> run test
> quit
You can run tests in other files by simply calling them. Say you have a file test-foo.js, then execute run test-foo in the above prompt to run it. All tests will share the same Chrome instance.
Failed attempt #1: saving and restoring storage
When inspecting the page using my development tools, I noticed that it appears to use the localStorage. It is possible to export this as JSON and write it to a file. On a next invocation, this file can be read, parsed and written to the new browser instance storage before reloading the page.
Unfortunately, WhatsApp still required me to scan the QR code. I have tried to figure out what I missed (cookies, sessionStorage, ...), but did not manage. It is possible that WhatsApp registers the browser as being disconnected after some time has passed. Or that it uses other browser properties (session ID?) to recognize the browser. This is pure speculating from my side though.
Failed attempt #2: switching session/window
Every browser instance started via WebDriver has a session ID. This ID can be retrieved, so I figured it may be possible to start a session and then connect to it from the test cases, which would then be run from a separate file (you can see this is the predecessor of the final solution). Unfortunately, I have not been able to figure out a way to set the session ID. This may actually be a security concern, I am not sure. People more expert in the usage of WebDriver might be able to clarify here.
I did find out that it is possible to retrieve a list of window handles and switch between them. Unfortunately, windows are only shared within a single session and not across sessions.

nodejs for linux server programming / as scripting language

I am writing a script for provisioning new users for my application.
The script will be written in Node, as one of its tasks will be connecting to MySQL to create new users in the application's database.
I tried to use the spawn-sync library (which, surprisingly, also seems to be async), but for every single bash command I execute I need to do the following:
var spawnSync = require('spawn-sync');
var user_name = process.argv[2];
new Promise((resolve) => {
    var result = spawnSync('useradd', [user_name]);
    if (result.status !== 0) {
        process.stderr.write(result.stderr);
        process.exit(result.status);
    } else {
        process.stdout.write(result.stdout);
        process.stderr.write(result.stderr);
    }
    resolve();
}).then(new Promise(function(resolve) {
    // execute another part of script
    resolve();
}));
Is there a better way of doing this? Whenever I try to look something up, all tutorials on the web seem to talk only about Express when it comes to Node.js.
Or perhaps you would discourage using Node.js as a server-side scripting language?
If you want to run processes synchronously, Node.js has that functionality built in via child_process.execSync(). Note that if the child process exits with a non-zero code, it will throw (so you'll need to wrap it in a try/catch).
try {
    var cmd = ['useradd', user_name].join(' ');
    var stdout = require('child_process').execSync(cmd);
    console.log(stdout.toString());
}
catch (e) {
    console.log(e);
}
