Testing a node command line app with jasmine-node - javascript

How exactly would I go about testing a node-based CLI with Jasmine/jasmine-node? I have tested node modules with Jasmine in the past, which was easy: I would simply require the module in the spec file, initialise it and test it, but obviously that's different with a CLI. One option would be to convert it into a class, test that on its own, and then wrap it in a CLI, but that's not testing the real thing. Has anyone successfully tested a node CLI with Jasmine?

Like any good unix citizen, a node CLI app should support stdout redirection. If it does, it becomes fairly trivial to test: spawn the process, buffer its output, and run assertions on the buffer when the process exits. The example below uses tape, but the general concept of spawning a process and running assertions on its stdout should transfer to any other test framework, including jasmine-node.
var test = require('tape');
var spawn = require('child_process').spawn;
var path = require('path');
var read = require('fs').readFileSync;

test('binary', function (t) {
  t.plan(3);

  process.chdir(__dirname);

  var ps = spawn(process.execPath, [
    path.resolve(__dirname, '../bin/cmd.js'),
    'fixture.txt'
  ]);

  var out = '';
  var err = '';

  ps.stdout.on('data', function (buffer) { out += buffer; });
  ps.stderr.on('data', function (buffer) { err += buffer; });

  ps.on('exit', function (code) {
    var expected = read('expected.txt', 'utf-8');
    t.notOk(err, 'should not error');
    t.equal(code, 0, 'should exit with code 0');
    t.equal(out, expected, 'should perform the transform');
  });
});
This CLI app takes a text file as its first parameter and runs it through a transform. All you have to do is provide a fixture and expected result and then run the test.
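If you want to stay with jasmine-node, the same spawn-and-buffer strategy carries over almost line for line. Here is a minimal sketch, not a drop-in solution: the bin/cmd.js, fixture.txt and expected.txt paths are reused from the tape example above, and the asynchronous done callback assumes a jasmine-node version that supports it.

var spawn = require('child_process').spawn;
var path = require('path');
var read = require('fs').readFileSync;

describe('binary', function () {
  it('transforms the fixture and exits cleanly', function (done) {
    // Spawn the CLI under test with node, exactly as in the tape example.
    var ps = spawn(process.execPath, [
      path.resolve(__dirname, '../bin/cmd.js'),
      path.resolve(__dirname, 'fixture.txt')
    ]);

    var out = '';
    var err = '';
    ps.stdout.on('data', function (buffer) { out += buffer; });
    ps.stderr.on('data', function (buffer) { err += buffer; });

    // Assert on the buffered output once the process exits.
    ps.on('exit', function (code) {
      var expected = read(path.resolve(__dirname, 'expected.txt'), 'utf-8');
      expect(err).toBeFalsy();
      expect(code).toBe(0);
      expect(out).toEqual(expected);
      done();
    });
  });
});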

To run all tests in a certain folder from the CLI
On Windows, from the command prompt:
"<PATH TO NODE IF NOT IN ENV>\node.exe" "<PATH TO JASMINE-NODE MODULE>\jasmine-node\lib\jasmine-node\cli.js" --verbose --test-dir <PATH TO SPECS>
Example:
"C:\Program Files (x86)\nodejs\node.exe" "c:\MyProject\node_modules\jasmine-node\lib\jasmine-node\cli.js" --verbose --test-dir c:\MyProject\Specs
Check the docs on GitHub:
https://github.com/mhevery/jasmine-node
https://github.com/mhevery/jasmine-node/wiki/Command-Line-Usage

Related

ShellJs execute CLI command

I'm using codeceptjs with shelljs.
In one of my tests I'm invoking a Go application like this:
const shell = require('shelljs')
shell.exec('./myGoApplication')
When the application is started and running correctly, it presents a CLI that listens for input from the console, so I can type text on the keyboard and the application receives it.
But when I execute the command below, the text doesn't seem to reach the application's input, and the application doesn't invoke any commands:
shell.exec('my text')
Does anyone know how to make ShellJS send a command to my CLI while it is waiting for console input?
The Go CLI:
func StartCLI(relay RelayConn) {
    go func() {
        fmt.Println("[?] To see available commands write /?")
        reader := bufio.NewReader(os.Stdin)
        for {
            text, _ := reader.ReadString('\n')
            text = strings.Trim(text, "\n")
            switch true {
            case strings.HasPrefix(text, "/?"):
                fmt.Println(helpText)
            default:
                relay.Chat("Default Member Simulation chat message")
            }
        }
    }()
}
https://github.com/shelljs/shelljs
As of writing this answer, it is not possible for processes launched via exec to accept input. See issue #424 for details.
From the ShellJS FAQ:
We don't currently support running commands in exec which require interactive input. The correct workaround is to use the native child_process module:
child_process.execFileSync(commandName, [arg1, arg2, ...], {stdio: 'inherit'});
Using npm init as an example:
child_process.execFileSync('npm', ['init'], {stdio: 'inherit'});
Your code should therefore be something like this:
const child_process = require('child_process');
child_process.execFileSync('./myGoApplication', [], {stdio: 'inherit'});
Setting the stdio option to 'inherit' means that the child process's stdin, stdout, and stderr are attached to the parent process's (process.stdin, process.stdout, and process.stderr), so you should be able to type input into your Node.js program and have it passed through to your Go program.
See the documentation on child_process.execFileSync for more details.
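If, instead of typing interactively, you want the test itself to feed commands to the CLI, one option is to drop down to child_process.spawn and write to the child's stdin. A minimal sketch, assuming ./myGoApplication is the binary shown above and reads line-based commands from stdin:

const child_process = require('child_process');

// Pipe stdin so the script can write to it; pass stdout/stderr straight through to the terminal.
const child = child_process.spawn('./myGoApplication', [], { stdio: ['pipe', 'inherit', 'inherit'] });

// Each line written here reaches the Go CLI's bufio reader as if typed on the keyboard.
child.stdin.write('/?\n');
child.stdin.write('my text\n');

// Close stdin once all scripted commands have been sent.
child.stdin.end();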

Getting test results when creating a Karma server programmatically

I am trying to run multiple Karma test files in parallel from inside a Node script and find out which tests are passing or failing. Right now what I have is this:
const exec = require("child_process").exec;
exec("karma start " + filename, (error, stdout, stderr) => {
// handle errors and test results...
});
The code above works well, and I can get the information on tests passed or failed from stdout. However, it requires having installed Karma and all of the associated dependencies (reporters, browser launchers, etc.) globally. I am looking for a solution that doesn't require me to install all dependencies globally.
My first thought was this:
const karma = require("karma");
const server = new karma.Server(config, () => {
  // some logic
});
However, when trying this other approach, I have been unable to gather the test results programmatically.
When using new karma.Server(), is there any way in which I could know which tests have passed or failed (and, ideally, a stack trace of the error)? Alternatively, is there any other way in which I can execute my tests and get the desired information programmatically without the need to install dependencies globally?
Actually, changing the exec line to this seems to do the trick:
exec("node node_modules/karma/bin/karma start " + filename, (error, stdout, stderr) => {
It turns out I only needed to run the locally installed version of Karma instead of the global one. :-)
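If you do want the fully programmatic route rather than shelling out, karma.Server is an event emitter, and as far as I understand Karma's public API it emits a run_complete event with aggregate results once a run finishes. A rough sketch, reusing the config object from the question (verify the event name and result fields against the Karma version you use):

const karma = require("karma");

const server = new karma.Server(config, (exitCode) => {
  console.log("Karma has exited with " + exitCode);
});

// `results` should carry aggregate counts such as `success` and `failed`;
// per-spec details and stack traces are normally surfaced by the configured reporters.
server.on("run_complete", (browsers, results) => {
  console.log("passed: " + results.success + ", failed: " + results.failed);
});

server.start();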

Mocha not recognizing CoffeeScript/JS test when adding programmatically

I am trying to add mocha to an existing project. I have the following test just for putting things together...
assert = require('assert');

describe 'Array', ->
  describe '#indexOf()', ->
    it 'should return -1 when the value is not present', ->
      assert.equal(-1, [1,2,3].indexOf(4));
Options (in mocha.opts)...
--compilers coffee:coffee-script/register
Then I run mocha --opts ./mocha.opts src/test/coffee/test/test.coffee and I see
1 passing (6ms)
Now I try to create a runner file to handle this:
globFlat = require('glob-flat');
Mocha = require('mocha');

mocha = new Mocha();

files = globFlat.sync([
  'src/test/coffee/test/test.coffee'
]);

mocha.addFile file for file in files

mocha.run();
And when I run mocha --opts ./mocha.opts src/test/mocha/mocha-runner.coffee I get
0 passing (0ms)
So why is it not finding the test?
Update
I have also converted everything over to JS to ensure it wasn't an issue with CS and I am getting the same thing...
require('coffee-script');
var globFlat = require('glob-flat');
var Mocha = require('mocha');
var mocha = new Mocha();
mocha.addFile('src/test/coffee/test/test.js');
var runner = mocha.run();
console.log("Done");
It runs like this...
mocha src/test/mocha/mocha-runner.js
Done
0 passing (0ms)
Update 2
OK, so it appears I should be using node and not mocha to run it. This presents a problem: the .js version works, but the .coffee version throws an error...
(function (exports, require, module, __filename, __dirname) { require 'coffee-script';
^^^^^^^^^^^^^^^
SyntaxError: Unexpected string
This is because node cannot parse the CoffeeScript.
I am going to post this up here as an answer, although I don't like it...
mocha-wrapper.js
require('coffee-script/register');
require('./mocha-runner.coffee');
mocha-runner.coffee
globFlat = require 'glob-flat';
Mocha = require 'mocha';
mocha = new Mocha();
files = globFlat.sync([
  'src/test/coffee/test/test.coffee'
]);
mocha.addFile file for file in files
mocha.run();
I would like to avoid the multiple files, but for now it looks like I am stuck :-/. That is why I am going to wait before accepting my own answer.
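One possible way to collapse this back into a single file, sketched here and untested, is to keep the runner itself in plain JavaScript, register the CoffeeScript compiler at the top so Mocha can load .coffee specs, and run it with node. Mocha's programmatic runner also passes the number of failing tests to its callback, which is handy for setting an exit code (the file location below is just an assumption matching the layout above):

// src/test/mocha/mocha-runner.js -- run with: node src/test/mocha/mocha-runner.js
require('coffee-script/register'); // lets Mocha compile .coffee spec files on require

var Mocha = require('mocha');
var mocha = new Mocha();

mocha.addFile('src/test/coffee/test/test.coffee');

// The callback receives the number of failing tests.
mocha.run(function (failures) {
  process.exitCode = failures ? 1 : 0;
});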

custom targets / running arbitrary code

In Make it's possible to define custom targets that bear no relation to the actual code they act upon, in the sense that they are language-agnostic.
release_sortof:
	@echo packaging release...
	tar czf release.tar.gz file1 file2 file3
	ls /dev/null
	ls /dev/stderr
	ls /dev/stdout
I know the example above is horrible, but the point I'm trying to illustrate is that the code in the release_sortof target doesn't depend on the fact that my project uses code written in C, for example; nor does it depend on me using Make built-ins such as foreach.
Is there a way to work with javascript/<INSERT-NAME>script files without relying on the often insufficient plugins available for gulp? That is, could I lint my CoffeeScript with coffeelint by calling the coffeelint module directly:
var gulp = require('gulp')
  , coffeelint = require('coffeelint')
  ;

gulp.task('lint', function() {
  /* run coffeelint on source files */
});
Or can this only be done using plugins?
Another example would be to run arbitrary code like so:
var spawn = require('child_process').spawn;

gulp.task('blue', function() {
  var child = spawn('ls');
  /* do stuff with spawned child process */
});
I do this kind of thing for browserify using vinyl-source-stream, which basically lets you use the library as it is instead of going through a gulp-* plugin.
var browserify = require('browserify'),
    gulp = require('gulp'),
    source = require('vinyl-source-stream'),
    stringify = require('stringify'),
    plumber = require('gulp-plumber'),
    config = require('../config').scripts;

gulp.task('browserify', function () {
  return browserify(config.app)
    .transform(stringify(['.html']))
    .bundle()
    .pipe(plumber())
    .pipe(source('bundle.js'))
    .pipe(gulp.dest(config.dest));
});
Here's the npm package: https://www.npmjs.com/package/vinyl-source-stream
Use conventional text streams at the start of your gulp or vinyl pipelines, making for nicer interoperability with the existing npm stream ecosystem.
Maybe that will help you?
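For the lint example specifically, nothing stops a gulp task body from being arbitrary Node code. Here is a minimal sketch that spawns the coffeelint command-line tool instead of using a gulp plugin; it assumes coffeelint is installed locally (so its binary is in node_modules/.bin) and that the sources live in src/:

var gulp = require('gulp');
var spawn = require('child_process').spawn;

gulp.task('lint', function (cb) {
  // Run the locally installed coffeelint CLI and stream its output to the terminal.
  var child = spawn('./node_modules/.bin/coffeelint', ['src'], { stdio: 'inherit' });

  child.on('exit', function (code) {
    // Tell gulp the task is finished; a non-zero exit code fails the task.
    cb(code === 0 ? null : new Error('coffeelint exited with code ' + code));
  });
});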

How to run node js application programmatically using forever

I need to run a set of Node.js files that contain configuration information (typically a port number and an IP address), and then use forever to run the script from the terminal with that configuration, without any manual input.
For a programmatic approach, you can use forever-monitor:
var forever = require('forever-monitor');

var child = new (forever.Monitor)('your-filename.js', {
  max: 3,
  silent: true,
  options: []
});

child.on('exit', function () {
  console.log('your-filename.js has exited after 3 restarts');
});

child.start();
You could make use of the child_process module. Check the docs, there's some useful information there: http://nodejs.org/api/child_process.html
To give a brief example:
var exec = require('child_process').exec;

exec('forever', function callback(error, stdout, stderr){
  // cb
});
If you don't need a callback / don't want to wait for the execution:
var exec = require('child_process').exec('forever').unref();
Was that helpful?
Best
Marc
Edit: Ok, not sure if I really got the point of your question, but my answer combined with https://stackoverflow.com/a/23871739/823851 might offer a good solution.
Usage:
forever start hello.js to start a process.
forever list to see list of all processes started by forever
forever stop hello.js to stop the process, or forever stop 0 to stop the process with index 0 (as shown by forever list).
node-config is a good module for managing different configurations of a Node.js app.
For example, to make a "production" config, /config/production.json would contain:
{
  "port" : "3000",
  "ip" : "192.168.1.1"
}
In one of your node application JS files:
config = require('config')
....
var port = config.port;
var ip = config.ip;
And to launch the app with this configuration, just set NODE_ENV to production from the shell before running your app:
export NODE_ENV=production
forever start app.js
Make additional config JSON files as needed for each of your environments. default.json is used when no environment is specified.
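Putting the two answers together, a fully scripted launch could set the environment for node-config and hand the script to forever-monitor, so nothing has to be typed manually. A rough sketch; the env option reflects my reading of the forever-monitor README, so check it against the version you have installed:

var forever = require('forever-monitor');

var child = new (forever.Monitor)('app.js', {
  max: 3,
  silent: true,
  // Environment passed to the child so node-config loads config/production.json.
  // (Assumption: forever-monitor forwards the `env` option to the spawned process.)
  env: { NODE_ENV: 'production' }
});

child.on('exit', function () {
  console.log('app.js has exited after 3 restarts');
});

child.start();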
