I am writing a script for provisioning new users for my application.
The script will be written in Node, as one of its tasks will be connecting to MySQL to create new users in the application's database.
I tried using the spawn-sync library (which surprisingly seems to be async as well) to execute shell commands, but for every single one of them I need to do the following:
var spawnSync = require('spawn-sync');

var user_name = process.argv[2];

new Promise((resolve) => {
  var result = spawnSync('useradd', [user_name]);
  if (result.status !== 0) {
    process.stderr.write(result.stderr);
    process.exit(result.status);
  } else {
    process.stdout.write(result.stdout);
    process.stderr.write(result.stderr);
  }
  resolve();
}).then(function () {
  return new Promise(function (resolve) {
    // execute another part of the script
    resolve();
  });
});
Is there a better way of doing this? Whenever I try to look something up, all the tutorials on the web only seem to talk about Express when it comes to Node.js.
Or would you discourage using Node.js as a server-side scripting language?
If you want to interact with processes synchronously, Node.js has that functionality built in via child_process.execSync(). Note that if the child process has a non-zero exit code it will throw (so you'll need to wrap it with a try/catch).
try {
  var cmd = ['useradd', user_name].join(' ');
  var stdout = require('child_process').execSync(cmd);
  console.log(stdout.toString()); // execSync returns a Buffer unless an encoding is set
} catch (e) {
  console.log(e);
}
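Since user_name comes from process.argv, you may prefer not to build the command string yourself at all. Here is a minimal sketch using execFileSync (also built in), which takes the arguments as an array so no shell quoting is involved; the useradd call is just the example from the question:

var execFileSync = require('child_process').execFileSync;

var user_name = process.argv[2];

try {
  // arguments are passed as an array, so nothing is interpreted by a shell
  var stdout = execFileSync('useradd', [user_name]);
  console.log(stdout.toString());
} catch (e) {
  // non-zero exit codes throw here as well; e.status carries the exit code
  console.error(e.message);
  process.exit(e.status || 1);
}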
I am really new to Selenium. I managed to open a website using the Node.js code below:
var webdriver = require('selenium-webdriver');
var driver = new webdriver.Builder()
.forBrowser('chrome')
.build();
console.log(driver);
driver.get('https://web.whatsapp.com');
//perform all other operations here.
https://web.whatsapp.com opens, I manually scan a QR code and log in. I then have different JavaScript files to perform actions like deleting chats, clearing chats, etc. inside web.whatsapp.com.
Now, if I get an error, I debug it, and when I run the script again using node test.js it takes another 2 minutes to load the page and do the steps I need. I just want to reuse the already opened tab and continue my script there instead of having a new window open every time.
Edit, day 2: Still searching for a solution. I tried the code below to save the driver object and reuse it. Is this the correct approach? I get a JSON parse error though.
var webdriver = require('selenium-webdriver');
var chrome = require('selenium-webdriver/chrome');
var fs = require('fs');
var util = require('util');

var o = new chrome.Options();
o.addArguments("user-data-dir=/Users/vishnu/Library/Application Support/Google/Chrome/Profile 2");
o.addArguments("disable-infobars");
o.addArguments("--no-first-run");
var driver = new webdriver.Builder().withCapabilities(webdriver.Capabilities.chrome()).setChromeOptions(o).build();
var savefile = fs.writeFile('data.json', JSON.stringify(util.inspect(driver)), 'utf-8');
var parsedJSON = require('./data.json');
console.log(parsedJSON);
It took me some time and a couple of different approaches, but I managed to work up something I think solves your problem and allows you to develop tests in a rather nice way.
Because it does not directly answer the question of how to re-use a browser session in Selenium (using their JavaScript API), I will first present my proposed solution and then briefly discuss the other approaches I tried. It may give someone else an idea and help them to solve this problem in a nicer/better way. Who knows. At least my attempts will be documented.
Proposed solution (tested and works)
Because I did not manage to actually reuse a browser session (see below), I figured I could try something else. The approach will be the following.
Idea
Have a main loop in one file (say init.js) and tests in a separate file (test.js).
The main loop opens a browser instance and keeps it open. It also exposes some sort of CLI that allows one to run tests (from test.js), inspect errors as they occur and to close the browser instance and stop the main loop.
The test in test.js exports a test function that is being executed by the main loop. It is passed a driver instance to work with. Any errors that occur here are being caught by the main loop.
Because the browser instance is opened only once, we have to do the manual process of authenticating with WhatsApp (scanning a QR code) only once. After that, running a test will reload web.whatsapp.com, but it will have remembered that we authenticated and thus immediately be able to run whatever tests we define in test.js.
In order to keep the main loop alive, it is vital that we catch each and every error that might occur in our tests. I unfortunately had to resort to uncaughtException for that.
Implementation
This is the implementation of the above idea I came up with. It is possible to make this much fancier if you would want to do so. I went for simplicity here (hope I managed).
init.js
This is the main loop from the above idea.
var webdriver = require('selenium-webdriver'),
by = webdriver.By,
until = webdriver.until,
driver = null,
prompt = '> ',
testPath = 'test.js',
lastError = null;
function initDriver() {
return new Promise((resolve, reject) => {
// already opened a browser? done
if (driver !== null) {
resolve();
return;
}
// open a new browser, let user scan QR code
driver = new webdriver.Builder().forBrowser('chrome').build();
driver.get('https://web.whatsapp.com');
process.stdout.write("Please scan the QR code within 30 seconds...\n");
driver.wait(until.elementLocated(by.className('chat')), 30000)
.then(() => resolve())
.catch((timeout) => {
process.stdout.write("\b\bTimed out waiting for code to" +
" be scanned.\n");
driver.quit();
reject();
});
});
}
function recordError(err) {
process.stderr.write(err.name + ': ' + err.message + "\n");
lastError = err;
// let user know that test failed
process.stdout.write("Test failed!\n");
// indicate we are ready to read the next command
process.stdout.write(prompt);
}
process.stdout.write(prompt);
process.stdin.setEncoding('utf8');
process.stdin.on('readable', () => {
var chunk = process.stdin.read();
if (chunk === null) {
// happens on initialization, ignore
return;
}
// do various different things for different commands
var line = chunk.trim(),
cmds = line.split(/\s+/);
switch (cmds[0]) {
case 'error':
// print last error, when applicable
if (lastError !== null) {
console.log(lastError);
}
// indicate we are ready to read the next command
process.stdout.write(prompt);
break;
case 'run':
// open a browser if we didn't yet, execute tests
initDriver().then(() => {
// carefully load test code, report SyntaxError when applicable
var file = (cmds.length === 1 ? testPath : cmds[1] + '.js');
try {
var test = require('./' + file);
} catch (err) {
recordError(err);
return;
} finally {
// force node to read the test code again when we
// require it in the future
delete require.cache[__dirname + '/' + file];
}
// carefully execute tests, report errors when applicable
test.execute(driver, by, until)
.then(() => {
// indicate we are ready to read the next command
process.stdout.write(prompt);
})
.catch(recordError);
}).catch(() => process.stdin.destroy());
break;
case 'quit':
// close browser if it was opened and stop this process
if (driver !== null) {
driver.quit();
}
process.stdin.destroy();
return;
}
});
// some errors somehow still escape all catches we have...
process.on('uncaughtException', recordError);
test.js
This is the test from the above idea. I wrote some things just to test the main loop and some WebDriver functionality. Pretty much anything is possible here. I have used promises to make test execution work nicely with the main loop.
var driver, by, until,
timeout = 5000;
function waitAndClickElement(selector, index = 0) {
driver.wait(until.elementLocated(by.css(selector)), timeout)
.then(() => {
driver.findElements(by.css(selector)).then((els) => {
var element = els[index];
driver.wait(until.elementIsVisible(element), timeout);
element.click();
});
});
}
exports.execute = function(d, b, u) {
// make globally accessible for ease of use
driver = d;
by = b;
until = u;
// actual test as a promise
return new Promise((resolve, reject) => {
// open site
driver.get('https://web.whatsapp.com');
// make sure it loads fine
driver.wait(until.elementLocated(by.className('chat')), timeout);
driver.wait(until.elementIsVisible(
driver.findElement(by.className('chat'))), timeout);
// open menu
waitAndClickElement('.icon.icon-menu');
// click profile link
waitAndClickElement('.menu-shortcut', 1);
// give profile time to animate
// this prevents an error from occurring when we try to click the close
// button while it is still being animated (workaround/hack!)
driver.sleep(500);
// close profile
waitAndClickElement('.btn-close-drawer');
driver.sleep(500); // same for hiding profile
// click some chat
waitAndClickElement('.chat', 3);
// let main script know we are done successfully
// we do so after all other webdriver promise have resolved by creating
// another webdriver promise and hooking into its resolve
driver.wait(until.elementLocated(by.className('chat')), timeout)
.then(() => resolve());
});
};
Example output
Here is some example output. The first invocation of run test will open up an instance of Chrome. Other invocations will use that same instance. When an error occurs, it can be inspected as shown. Executing quit will close the browser instance and quit the main loop.
$ node init.js
> run test
> run test
WebDriverError: unknown error: Element <div class="chat">...</div> is not clickable at point (163, 432). Other element would receive the click: <div dir="auto" contenteditable="false" class="input input-text">...</div>
(Session info: chrome=57.0.2987.133)
(Driver info: chromedriver=2.29.461571 (8a88bbe0775e2a23afda0ceaf2ef7ee74e822cc5),platform=Linux 4.9.0-2-amd64 x86_64)
Test failed!
> error
<prints complete stacktrace>
> run test
> quit
You can run tests in other files by simply calling them. Say you have a file test-foo.js, then execute run test-foo in the above prompt to run it. All tests will share the same Chrome instance.
Failed attempt #1: saving and restoring storage
When inspecting the page using my development tools, I noticed that it appears to use localStorage. It is possible to export this as JSON and write it to a file. On the next invocation, this file can be read, parsed and written to the new browser instance's storage before reloading the page.
Unfortunately, WhatsApp still required me to scan the QR code. I have tried to figure out what I missed (cookies, sessionStorage, ...), but did not manage. It is possible that WhatsApp registers the browser as being disconnected after some time has passed. Or that it uses other browser properties (session ID?) to recognize the browser. This is pure speculation on my part, though.
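For reference, the dump/restore itself is straightforward with the JavaScript WebDriver API; this is roughly what I tried (a sketch of the failed attempt, not something that will get you past the QR code on its own; storage.json is just an example file name):

var fs = require('fs');

// dump localStorage of the current page to a file
driver.executeScript('return JSON.stringify(window.localStorage);')
  .then(function (json) {
    fs.writeFileSync('storage.json', json);
  });

// ...and on a later run, restore it before reloading the page
var saved = JSON.parse(fs.readFileSync('storage.json', 'utf-8'));
driver.get('https://web.whatsapp.com');
driver.executeScript(function (data) {
  Object.keys(data).forEach(function (key) {
    window.localStorage.setItem(key, data[key]);
  });
}, saved);
driver.navigate().refresh();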
Failed attempt #2: switching session/window
Every browser instance started via WebDriver has a session ID. This ID can be retrieved, so I figured it may be possible to start a session and then connect to it from the test cases, which would then be run from a separate file (you can see this is the predecessor of the final solution). Unfortunately, I have not been able to figure out a way to set the session ID. This may actually be a security concern, I am not sure. People more expert in the usage of WebDriver might be able to clarify here.
I did find out that it is possible to retrieve a list of window handles and switch between them. Unfortunately, windows are only shared within a single session and not across sessions.
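For completeness, reading the session ID and switching between window handles looks roughly like this with the JavaScript API (a sketch; as noted, I found no supported way to attach a second Builder to an existing session ID):

// read the session ID of a running driver
driver.getSession().then(function (session) {
  console.log('session id:', session.getId());
});

// list the window handles of this session and switch to the first one
driver.getAllWindowHandles().then(function (handles) {
  return driver.switchTo().window(handles[0]);
});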
I have a SPA application that will make multiple reads/writes to IndexedDB.
Opening the DB is an asynchronous operation with a callback:
var db;
var request = window.indexedDB.open("MyDB", 2);
request.onupgradeneeded = function(event) {
// Upgrade to latest version...
}
request.onerror = function(event) {
// Uh oh...
}
request.onsuccess = function(event) {
// DB open, now do something
db = event.target.result;
};
There are two ways I can use this db instance:
Keep a single db instance for the life of the page/SPA?
Call db.close() once the current operation is done and open a new one on the next operation?
Are there pitfalls of either pattern? Does keeping the indexedDB open have any risks/issues? Is there an overhead/delay (past the possible upgrade) to each open action?
I have found that opening a connection per operation does not substantially degrade performance. I have been running a local Chrome extension for over a year now that involves a ton of indexedDB operations and have analyzed its performance hundreds of times and have never witnessed opening a connection as a bottleneck. The bottlenecks come in doing things like not using an index properly or storing large blobs.
Basically, do not base your decision here on performance. It really isn't the issue in terms of connecting.
The issue is really the ergonomics of your code: how much you are fighting against the APIs, how intuitive your code feels when you look at it, how understandable you think the code is, and how welcoming it is to fresh eyes (your own a month later, or someone else's). This becomes very noticeable when dealing with the blocking issue, which is indirectly about application modality.
My personal opinion is that if you are comfortable with writing async Javascript, use whatever method you prefer. If you struggle with async code, choosing to always open the connection will tend to avoid any issues. I would never recommend using a single global page-lifetime variable to someone who is newer to async code. You are also leaving the variable there for the lifetime of the page. On the other hand, if you find async trivial, and find the global db variable more amenable, by all means use it.
Edit - based on your comment I thought I would share some pseudocode of my personal preference:
function connect(name, version) {
return new Promise((resolve, reject) => {
const request = indexedDB.open(name, version);
request.onupgradeneeded = onupgradeneeded;
request.onsuccess = () => resolve(request.result);
request.onerror = () => reject(request.error);
request.onblocked = () => console.warn('pending till unblocked');
});
}
async function foo(bar) {
let conn;
try {
conn = await connect(DBNAME, DBVERSION);
await storeBar(conn, bar);
} finally {
if(conn)
conn.close();
}
}
function storeBar(conn, bar) {
return new Promise((resolve, reject) => {
const tx = conn.transaction('store', 'readwrite'); // put() requires a readwrite transaction
const store = tx.objectStore('store');
const request = store.put(bar);
request.onsuccess = () => resolve(request.result);
request.onerror = () => reject(request.error);
});
}
With async/await, there isn't too much friction in having the extra conn = await connect() line in your operational functions.
Opening a connection each time is likely to be slower just because the browser is doing more work (e.g. it may need to read data from disk). Otherwise, there are no real downsides.
Since you mention upgrades, either pattern requires a different approach to the scenario where the user opens your page in another tab and it tries to open the database with a higher version (because it downloaded newer code from your server). Let's say the old tab was version 3 and the new tab is version 4.
In the one-connection-per-operation case you'll find that your open() on version 3 fails, because the other tab was able to upgrade to version 4. You can notice that the open failed with VersionError e.g. and inform the user that they need to refresh the page.
In the one-connection-per-page case your connection at version 3 will block the other tab. The v4 tab can respond to the "blocked" event on the request and let the user know that older tabs should be closed. Or the v3 tab can respond to the versionchange event on the connection and tell the user that it needs to be closed. Or both.
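Both sides of that handshake in code, as a rough sketch (the version numbers are just the ones from the example above):

const request = indexedDB.open('MyDB', 4);

// fired in the newer (v4) tab if an older tab still holds an open connection
request.onblocked = () => {
  console.warn('Please close other tabs with this site open.');
};

request.onsuccess = (event) => {
  const db = event.target.result;
  // fired in the older (v3) tab when another tab asks for a higher version
  db.onversionchange = () => {
    db.close(); // releasing the connection unblocks the upgrading tab
    console.warn('This page is outdated, please refresh it.');
  };
};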
In my current project, we have an HTML page. On this page we have several buttons, for instance buttons for a Temperature Sensor, a Humidity Sensor, an Alarm, etc. When we click on a button, the back end should run the corresponding Node.js file; for instance, when we click on the Temperature Sensor button it should run the TemperatureSensor.js file located in the same path. The code for the HTML page is as shown below:
The code of TemperatureSensor.js is shown below:
var mqtt = require('mqtt');
var client = mqtt.connect('mqtt://test.mosquitto.org:1883');
var NUM_SAMPLE_FOR_AVG = 5;
var numSample = 0;
var tempCelcius = 0;
var currentAvg = 0;
client.subscribe('tempMeasurement');
client.on('message', function(topic, payload) {
if (topic.toString() == "tempMeasurement") {
sensorMeasurement = JSON.parse(payload);
console.log("tempValue is " + sensorMeasurement.tempValue);
if (numSample <= NUM_SAMPLE_FOR_AVG) {
numSample = numSample + 1;
if (sensorMeasurement.unitOfMeasurement == 'F') {
tempCelcius = ((sensorMeasurement.tempValue - 32) * (5 / 9));
} else {
tempCelcius = sensorMeasurement.tempValue;
}
currentAvg = parseFloat(currentAvg) + parseFloat(tempCelcius);
if (numSample == NUM_SAMPLE_FOR_AVG) {
currentAvg = currentAvg / NUM_SAMPLE_FOR_AVG;
var avgTemp = {
"tempValue" : parseFloat(currentAvg),
"unitOfMeasurement" : sensorMeasurement.unitOfMeasurement
};
client.publish('roomAvgTempMeasurement', JSON
.stringify(avgTemp));
console.log("Publishing Data roomAvgTempMeasurement ");
numSample = 0;
currentAvg = 0;
}
}
}
});
The problem is that when we click the TemperatureSensor button in the browser, it displays the error: TemperatureSensor.js:1 Uncaught ReferenceError: require is not defined. If the content of TemperatureSensor.js is just console.log("Hello"), then it does display Hello in the browser console. How do I provide the dependency? Why do we need this at all? Because to run TemperatureSensor, HumiditySensor, etc. we currently have to run them in a terminal; for instance, to run TemperatureSensor we have to type sudo node TemperatureSensor.js. This requires more manual effort, so to reduce it we want this kind of HTML page. How can the above problem be resolved?
You can't run Node.js code in the browser; they're completely separate environments (for example, browsers do not have the require function, which is why you're getting that error). Your best bet is to look into creating a REST service of some kind (using Express, Hapi or Restify, most likely) that will allow you to call a Node.js server over HTTP.
This is a decent introduction to the topic - it uses MongoDB for data persistence, but this is in no way a requirement when it comes to making stuff like this. In your case, you'll basically just have to define a route for Temp and Humidity, run your code to get the data in the route handler, and then send JSON data back on the response object. You'll then be able to use jQuery (or any number of other libraries) to make AJAX requests to these routes.
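For example, a minimal sketch of such a route, assuming Express is installed (npm install express); the /temperature path and the latestAvgTemp variable are just illustrative names:

var express = require('express');
var app = express();

// latest averaged value; in the real code this would be filled in by the
// MQTT 'message' handler from TemperatureSensor.js
var latestAvgTemp = null;

app.get('/temperature', function (req, res) {
  if (latestAvgTemp === null) {
    return res.status(503).json({ error: 'no measurement yet' });
  }
  res.json(latestAvgTemp); // e.g. { tempValue: 21.4, unitOfMeasurement: 'C' }
});

app.listen(3000, function () {
  console.log('Sensor API listening on http://localhost:3000');
});

The button handler on the HTML page would then make an AJAX GET request to http://localhost:3000/temperature instead of trying to load TemperatureSensor.js directly.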
EDIT: After looking at the MQTT GitHub page, there is another option - the library can be used in the browser if bundled using a tool like Browserify or Webpack. Given the complexities of learning to write and maintain REST services, this may well be a better option.
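If you go the bundling route, the workflow is roughly the following (a sketch; note that in the browser MQTT.js has to connect over WebSockets rather than plain mqtt://, and the port shown is the WebSocket listener advertised by test.mosquitto.org, so check their page for the current value):

// 1. bundle the file for the browser (run in a terminal):
//      npm install -g browserify
//      browserify TemperatureSensor.js -o bundle.js
// 2. include bundle.js in the HTML page instead of TemperatureSensor.js
// 3. inside the bundled code, connect over WebSockets instead of mqtt://
var mqtt = require('mqtt');
var client = mqtt.connect('ws://test.mosquitto.org:8080');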
I am working on a university project where I have to evaluate the security threats to an open WiFi network. I have chosen the aircrack-ng set of tools for penetration testing. My project uses Node.js for its rich set of features. However, I am a beginner and am struggling to solve a problem. First, I shall present my code and then pose the problem.
var spawn = require('child_process').spawn;
var nic = "wlan2";
//obtain uid number of a user for spawning a new console command
//var uidNumber = require("uid-number");
// uidNumber("su", function (er, uid, gid) {
// console.log(uid);
// });
//Check for monitor tools
var airmon_ng= spawn('airmon-ng');
airmon_ng.stdout.on('data', function (data) {
var nicList = data.toString().split("\n");
//use for data binding
console.log(nicList[0]);//.split("\t")[0]);
});
//airmon start at the nic(var)
var airmon_ng_start = spawn('airmon-ng',['start',nic]).on('error',function(err){console.log(err);});
airmon_ng_start.stdout.on('data', function (data) {
console.log(data.toString());
});
var airmon_ng_start = spawn('airodump-ng',['mon0']).on('error',function(err){console.log(err);});
airmon_ng_start.stdout.on('data', function (data) {
console.log(data.toString());
});
As seen in the above code, I use child_process.spawn to execute the shell commands. In the line "var airmon_ng_start = spawn(..." the actual command executes in the terminal and doesn't end until Ctrl+C is hit, and it regularly updates the list of Wi-Fi networks available in the vicinity. My goal is to identify the network that I wish to test for vulnerabilities. However, when I execute the command the process waits indefinitely for the shell command to terminate (which never terminates until killed); moreover, I wish to use the stdout stream to display the new data as the scan finds and updates networks. Could the Node.js experts suggest a better way to do this?
2) Also, I wish to execute some commands as root. How may this be done? For now I am running the JavaScript as root. However, in the project I wish to execute only some of the commands as root and not the entire JS file as root. Any suggestions?
//inherit parent`s stdout stream
var airmon_ng_start = spawn('airodump-ng',['mon0'],{ stdio: 'inherit' })
.on('error',function(err){console.log(err);});
Found this solution. Simply inherit the parent's stdout.
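If you still want to process the output in Node.js rather than just passing it through, you can keep the 'data' handlers and stop the scan yourself once you have what you need. A rough sketch (the 10-second timeout is arbitrary, and the commented sudo line is only a suggestion for running a single command as root):

var spawn = require('child_process').spawn;

var airodump = spawn('airodump-ng', ['mon0']);

airodump.stdout.on('data', function (data) {
  // parse/refresh the network list here as new output arrives
  console.log(data.toString());
});

airodump.stderr.on('data', function (data) {
  console.error(data.toString());
});

// airodump-ng never exits on its own, so terminate it once we are done
setTimeout(function () {
  airodump.kill('SIGINT');
}, 10000);

// to run a single command as root instead of the whole script, prefix it with sudo:
// var airmonStart = spawn('sudo', ['airmon-ng', 'start', 'wlan2']);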
Here is something that has been driving me crazy for an hour now. I'm working on a side project which involves accessing Elasticsearch from JavaScript. As part of the tests, I wanted to create an index. Here is a very simple snippet that, in my mind, should do this and print the messages returned from the Elasticsearch server:
var es = require('elasticsearch');
var es_client = new es.Client({host: "localhost:9200"});
var breaker = Math.floor((Math.random() * 100) + 1);
var create_promise = es_client.indices.create({index: "test-index-" + breaker});
create_promise.then(function(x) {
console.log(x);
}, function(err) { console.log(err);});
What happens when I go to a directory, run npm install elasticsearch, and then run this code with Node.js, is that the request is made, but the promise does not seem to settle for some reason. I would expect this code to run to the end and finish once the response from the ES server comes back. Instead, the process just hangs. Any ideas why?
I know that an index can be created just by adding a document to it, but this weird behavior just bugged me, and I couldn't figure out the reason or the sense behind it.
By default the client keeps persistent connections to elasticsearch so that subsequent requests to the same node are much faster. This has the side effect of preventing node from closing normally until client.close() is called. You could either add this to your callback, or disable keepAlive connections by adding keepAlive: false to your client config.
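Both variants in code, as a sketch built on the snippet from the question:

var es = require('elasticsearch');

// option 1: close the client when you are done so the process can exit
var es_client = new es.Client({ host: 'localhost:9200' });
es_client.indices.create({ index: 'test-index-1' })
  .then(function (resp) {
    console.log(resp);
  }, function (err) {
    console.log(err);
  })
  .then(function () {
    es_client.close(); // releases the keep-alive sockets
  });

// option 2: disable persistent connections entirely, at the cost of a
// new connection per request
var oneshot_client = new es.Client({ host: 'localhost:9200', keepAlive: false });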