How to launch cmd by using a Node.js file? - javascript

How can I launch cmd from a .js file and automatically type some commands into it?
const { exec } = require("child_process");
const open = require("open");
const readline = require("readline").createInterface({
  input: process.stdin,
  output: process.stdout,
});

readline.question("Launch cmd?", async (name) => {
  if (name === "Y") {
    await open("cmd.exe", { wait: true }); // how can I write something in cmd?
  } else {
    console.log("You have closed the program");
    process.exit();
  }
});

You can do something similar to what is suggested in this thread:
How to open a command line window in Node.js?
start cmd.exe /K node my-new-script.js parm1 parm2
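For example, you could launch such a window from Node itself with exec (a sketch; Windows only, and my-new-script.js with its parameters is just the placeholder from the linked thread):
// Sketch: open a new cmd window (/K keeps it open) that runs a script.
const { exec } = require("child_process");
exec("start cmd.exe /K node my-new-script.js parm1 parm2");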
But if you want to run multiple commands, use a batch script and run that batch script from Node.js. Batch scripts are simple text files containing commands that are executed in sequence, one after the other.
require('child_process').exec('cmd /c batfile.bat', function() {
  // …your callback code may run here…
});
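For reference, a hypothetical batfile.bat could simply list the commands to run in order, for example:
@echo off
rem batfile.bat - hypothetical example; the script name and arguments are placeholders
echo Starting...
node my-new-script.js parm1 parm2
echo Done.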
You can learn more about batch scripts here: https://www.tutorialspoint.com/batch_script/batch_script_overview.htm

Related

ShellJs execute CLI command

I'm using codeceptjs with shelljs.
In one of my tests I'm invoking a Go application like this:
const shell = require('shelljs')
shell.exec('./myGoApplication')
When the application is started and working correctly, I have a CLI that listens for keyboard input from the console, so I can type text and my application receives it.
But when I execute the following command, it seems the text is not transferred to the input and the application does not invoke the command.
shell.exec('my text')
Does someone know how to make ShellJS send a command to my CLI, which is waiting for console input?
Go CLI:
func StartCLI(relay RelayConn) {
    go func() {
        fmt.Println("[?] To see available commands write /?")
        reader := bufio.NewReader(os.Stdin)
        for {
            text, _ := reader.ReadString('\n')
            text = strings.Trim(text, "\n")
            switch true {
            case strings.HasPrefix(text, "/?"):
                fmt.Println(helpText)
            default:
                relay.Chat("Default Member Simulation chat message")
            }
        }
    }()
}
https://github.com/shelljs/shelljs
As of writing this answer, it is not possible for processes launched via exec to accept input. See issue #424 for details.
From the ShellJS FAQ:
We don't currently support running commands in exec which require interactive input. The correct workaround is to use the native child_process module:
child_process.execFileSync(commandName, [arg1, arg2, ...], {stdio: 'inherit'});
Using npm init as an example:
child_process.execFileSync('npm', ['init'], {stdio: 'inherit'});
Your code should therefore be something like this:
const child_process = require('child_process')
child_process.execFileSync('./myGoApplication', {stdio: 'inherit'})
Setting the stdio option to 'inherit' means that the child process's stdin, stdout, and stderr are attached to the parent process's streams (process.stdin, process.stdout, and process.stderr), so you should be able to type input into your Node.js program and have it sent to your Go program.
See the documentation on child_process.execFileSync for more details.
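If instead you want to send input to the CLI programmatically rather than typing it yourself, one option (a sketch using the native child_process module rather than ShellJS; the command text is just an example) is to spawn the process with a piped stdin and write to it:
// Sketch: drive the Go CLI programmatically by writing to its stdin.
// './myGoApplication' is the binary from the question; '/?' is an example command.
const { spawn } = require('child_process');
const child = spawn('./myGoApplication', { stdio: ['pipe', 'inherit', 'inherit'] });
// The CLI reads lines from stdin, so terminate each command with '\n'.
child.stdin.write('/?\n');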

get output from a javascript file into a javascript file

I want to run a JS file (as if it were a frontend file) in Node.js.
index.js:
const a = 'dk3';
console.log(a)
logs.js:
const logs = // exporting the file output above
console.log(logs)
The terminal output of logs.js: dk3
How can I do something like that? The index file cannot know anything about the logs file; logs.js has to execute index.js and capture its output as if it were a variable.
Edit: I got part of the way there with this code:
index.js:
console.log('foi')
logs.js:
const child_process = require('child_process');
child_process.exec('node src/console.js', (err, stdout, stderr) => {
  console.log(`log: ${stdout}`);
})
output of logs.js: foi
But index.js runs under Node.js, and I wanted it to run as frontend code, as if it were running in the HTML.
My assumption is that you are asking how to run index.js as a child process of the node process running logs.js, and that you want the code in logs.js to capture the stdout of the index.js child process and then write that output to its own stdout.
You can synchronously execute a child process with child_process.spawnSync() (Node v0.11.12+).
You can use this for logs.js:
const child_process = require('child_process');
const child = child_process.spawnSync('node', ['index.js'], { encoding : 'utf8' });
console.log(child.stdout);
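If you also want to surface errors from the child process, spawnSync exposes them on the returned object; a minimal sketch:
// Sketch: check the result of the synchronous child process before using its output.
const child_process = require('child_process');
const child = child_process.spawnSync('node', ['index.js'], { encoding: 'utf8' });
if (child.error) {
  // 'node' could not be spawned at all
  console.error(child.error);
} else if (child.status !== 0) {
  // the script ran but exited with a non-zero code
  console.error(child.stderr);
} else {
  console.log(child.stdout);
}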

How to remove all Electron bloatware on exit?

How can I remove cookies, local storage and other crap from the "AppData\roaming\MyApp" folder, when the electron application quits?
I tried deleting the whole directory on app quit, but it throws EBUSY errors. Apparently the files are locked or something, almost like someone doesn't want us to be able to remove the bloat?
const fs = require('fs-extra');
const path = require('path');
const { app } = require('electron');

const clearBloat = async () => fs.remove(path.join(app.getPath('appData'), app.name));

app.on('window-all-closed', async () => {
  await clearBloat();
});
After doing some testing, I've found that you have to delete the files after your Electron process has ended. (Trying to delete the files while in the quit or will-quit app events doesn't delete the files/folders; they get re-created right away. Something in Electron (likely Chromium) wants these files/folders to exist while the app is running, and it's too much work to figure out how to hook into it.)
What works for me is spawning a detached cmd off a shell that waits 3 seconds and then deletes all files/folders in a given application folder. Hiding the output of the ping command (or hiding the window, though there's been mixed success on that front), or choosing a different command, is left as an exercise for the reader. I've found timeout works, but sleep and choice (i.e. something like this) do not.
Here's what you will need to add:
const { app } = require("electron");
const { spawn } = require("child_process");
const path = require("path");

...

app.on("will-quit", async (event) => {
  const folder = path.join(app.getPath("appData"), app.name);
  // Wait 3 seconds, navigate into your app folder and delete all files/folders
  const cmd = `ping localhost -n 3 > nul 2>&1 && pushd "${folder}" && (rd /s /q "${folder}" 2>nul & popd)`;
  // shell = true prevents ENOENT errors
  // stdio = ignore allows the pipes to continue processing w/o handling command output
  // detached = true allows the command to run once your app is [completely] shut down
  const child = spawn(cmd, { shell: true, stdio: "ignore", detached: true });
  // Prevents the parent process from waiting for the child process to finish
  child.unref();
});
As another user mentioned, there is a method available on your Electron session, a native API that clears all of these files/folders. However, it returns a promise, and I could not figure out how to execute it synchronously within one of these app events.
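For reference, that native API is roughly the following (a sketch; in recent Electron versions these calls return promises, and whether they remove everything you consider bloat may vary):
// Sketch: clear cookies, local storage, caches, etc. via Electron's session API.
const { app, session } = require('electron');
app.on('window-all-closed', async () => {
  await session.defaultSession.clearStorageData();
  await session.defaultSession.clearCache();
  app.quit();
});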

Startup Sequence of Nodejs Apps using PM2

I am using the command pm2 start apps.json to start multiple apps in a single command. These apps are defined in apps.json:
{
  "apps": [
    {
      "name": "foo",
      "script": "./foo.js"
    },
    {
      "name": "bar",
      "script": "./bar.js"
    },
    {
      "name": "baz",
      "script": "./baz.js"
    }
  ]
}
Question: Is it possible to define the startup sequence, such that foo.js has to finish starting first before bar.js and baz.js can start?
For example, foo.js can perform a graceful start, running process.send('ready') to change its pm2 status to online. Only then will bar.js and baz.js be started by pm2. This would be similar to Docker Compose's depends_on parameter.
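For illustration, such a graceful start in foo.js could look roughly like this (a sketch; the HTTP server and port are placeholders, and it assumes wait_ready is enabled for the app in the PM2 config):
// foo.js - sketch of a graceful start (assumes "wait_ready": true for this app)
const http = require('http');
const server = http.createServer((req, res) => res.end('ok'));
server.listen(3000, () => {
  // Tell PM2 this process is ready, switching its status to "online"
  if (process.send) {
    process.send('ready');
  }
});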
No such thing can be done via the configuration file alone, but PM2 has a programmatic API which allows you to perform IPC (inter-process communication).
The following methods are the ones to work with:
pm2.list To list the processes that are running and get their names / IDs
pm2.launchBus For the processes that will receive information and react accordingly
pm2.sendDataToProcessId For sending information to another process
This way you can run multiple scripts and make one wait for another. Once a script receives a message on the message bus, it can start a process with pm2.start.
Here's a piece of PSEUDO-CODE to illustrate my point:
const pm2 = require('pm2');

pm2.connect(() => {
  pm2.list(function(err, processes) {
    const fooProcess = processes.find(p => p.name == 'foo');
    pm2.launchBus((err, bus) => {
      bus.on('process:msg', packet => {
        if (packet.startBar === true) {
          pm2.start({ script: 'bar.js' }, (err, apps) => { ... })
        }
      });
      bus.on('error', console.error);
    });
  });
});
In another script, you would have the following:
pm2.sendDataToProcessId(barProcessID, {
  data: { startBar: true },
  topic: 'process:msg'
}, (err, res) => console.log(err, res));
A little hack: you can write "start": "pm2 start server.js && pm2 start server1.js" inside package.json. If you run it with npm start, it will run the start script; similarly, you can create a script to stop them.
Otherwise, you can also use child_process (it comes with Node.js by default) to run the commands from inside your script, using childProcess.exec('pm2 start server.js && pm2 start server1.js');
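A minimal sketch of that child_process approach (server.js and server1.js are just the example names from above):
// Sketch: the && makes the second pm2 start run only after the first command exits successfully.
const childProcess = require('child_process');
childProcess.exec('pm2 start server.js && pm2 start server1.js', (err, stdout, stderr) => {
  if (err) {
    console.error(stderr);
    return;
  }
  console.log(stdout);
});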

How can Gulp be restarted upon each Gulpfile change?

I am developing a Gulpfile. Can it be made to restart as soon as it changes? I am developing it in CoffeeScript. Can Gulp watch Gulpfile.coffee in order to restart when changes are saved?
You can create a task that uses gulp.watch on gulpfile.js and simply spawns another gulp child process.
var gulp = require('gulp'),
    argv = require('yargs').argv, // for args parsing
    spawn = require('child_process').spawn;

gulp.task('log', function() {
  console.log('CSS has been changed');
});

gulp.task('watching-task', function() {
  gulp.watch('*.css', ['log']);
});

gulp.task('auto-reload', function() {
  var p;
  gulp.watch('gulpfile.js', spawnChildren);
  spawnChildren();

  function spawnChildren(e) {
    // kill previous spawned process
    if (p) { p.kill(); }
    // `spawn` a child `gulp` process linked to the parent `stdio`
    p = spawn('gulp', [argv.task], { stdio: 'inherit' });
  }
});
I used yargs in order to accept the 'main task' to run once we need to restart. So in order to run this, you would call:
gulp auto-reload --task watching-task
And to test, call either touch gulpfile.js or touch a.css to see the logs.
I created gulper, which is a gulp.js CLI wrapper that restarts gulp on gulpfile changes.
You can simply replace gulp with gulper.
$ gulper <task-name>
I use a small shell script for this purpose. This works on Windows as well.
Press Ctrl+C to stop the script.
// gulpfile.js
gulp.task('watch', function() {
  gulp.watch('gulpfile.js', process.exit);
});
Bash shell script:
# watch.sh
while true; do
  gulp watch;
done;
Windows version: watch.bat
@echo off
:label
cmd /c gulp watch
goto label
I was getting a bunch of EADDRINUSE errors with the solution in Caio Cunha's answer. My gulpfile opens a local webserver with connect and LiveReload. It appears the new gulp process briefly coexists with the old one before the older process is killed, so the ports are still in use by the soon-to-die process.
Here's a similar solution which gets around the coexistence problem, (based largely on this):
var gulp = require('gulp');
var spawn = require('child_process').spawn;

gulp.task('gulp-reload', function() {
  spawn('gulp', ['watch'], {stdio: 'inherit'});
  process.exit();
});

gulp.task('watch', function() {
  gulp.watch('gulpfile.js', ['gulp-reload']);
});
That works fairly well, but has one rather serious side effect: the last gulp process is disconnected from the terminal. So when gulp watch exits, an orphaned gulp process is still running. I haven't been able to work around that problem; the extra gulp process can be killed manually, or you can just save a syntax error to gulpfile.js.
I've been dealing with the same problem and the solution in my case was actually very simple. Two things.
npm install nodemon -g (or locally if you prefer)
Run it from cmd, or create a script in package.json like this:
"dev": "nodemon --watch gulpfile.js --exec gulp"
Then just type npm run dev.
--watch specifies the file to keep an eye on. --exec says execute what follows, and gulp is your default task. Just pass an argument if you want a non-default task.
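For reference, the relevant part of package.json would look something like this (the script name dev is just an example):
{
  "scripts": {
    "dev": "nodemon --watch gulpfile.js --exec gulp"
  }
}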
Hope it helps.
EDIT: Making it fancy ;)
While the first part should achieve what you were after, in my setup I needed to add a bit more to make it really user friendly. What I wanted was:
First open the page.
Look for changes in gulpfile.js and restart gulp if there are any
Gulp it up, so keep an eye on files, rebuild and hot reload
If you only do what I've said in the first part, it will open the page every time. To fix that, create a gulp task that opens the page, like this:
gulp.task('open', function(){
  return gulp
    .src(config.buildDest + '/index.html')
    .pipe(plugins.open({
      uri: config.url
    }));
});
Then in my main tasks I have:
gulp.task('default', ['dev-open']);

gulp.task('dev-open', function(done){
  plugins.sequence('build', 'connect', 'open', 'watch', done);
});

gulp.task('dev', function(done){
  plugins.sequence('build', 'connect', 'watch', done);
});
Then modifying your npm scripts to
"dev": "gulp open & nodemon --watch gulpfile.js --watch webpack.config.js --exec gulp dev"
will give you exactly what you want: first open the page, then just keep live-reloading. By the way, for livereload I use the one that comes with connect, which always uses the same port. Hope it works for you, enjoy!
Another solution for this is to refresh the require.cache.
var gulp = require('gulp');
var __filenameTasks = ['lint', 'css', 'jade'];
var watcher = gulp.watch(__filename).once('change', function(){
  watcher.end(); // we haven't re-required the file yet,
                 // so this is still the old watcher
  delete require.cache[__filename];
  require(__filename);
  process.nextTick(function(){
    gulp.start(__filenameTasks);
  });
});
I know this is a very old question, but it's a top result on Google, so it's still very relevant.
Here is an easier way, if your source gulpfile.js is in a different directory than the one in use. (That's important!) It uses the gulp modules gulp-newer and gulp-data.
var gulp          = require('gulp'),
    data          = require('gulp-data'),
    newer         = require('gulp-newer'),
    child_process = require('child_process');

gulp.task('gulpfile.js', function() {
  return gulp.src('sources/gulpfile.js')   // source
    .pipe(newer('.'))                      // check
    .pipe(gulp.dest('.'))                  // write
    .pipe(data(function(file) {            // reboot
      console.log('gulpfile.js changed! Restarting gulp...');
      var t, args = process.argv;
      while (args.shift().substr(-4) !== 'gulp') { t = args; }
      child_process.spawn('gulp', args, { stdio: 'inherit' });
      return process.exit();
    }));
});
It works like this:
Trick 1: gulp-newer only executes the following pipes if the source file is newer than the current one. This way we make sure there's no reboot loop.
The while loop removes everything before and including the gulp command from the argument list, so we can pass through any arguments.
child_process.spawn spawns a new gulp process, piping input, output and error to the parent.
Trick 2: process.exit kills the current process. However, the process will wait to die until the child process is finished.
There are many other ways of inserting the restart function into the pipes.
I just happen to use gulp-data in every one of my gulpfiles anyway. Feel free to comment your own solution. :)
Here's another version of @CaioToOn's reload code that is more in line with normal Gulp task procedure. It also does not depend on yargs.
Require spawn and initialize the process variable (yargs is not needed):
var spawn = require('child_process').spawn;
var p;
The default gulp task will be the spawner:
gulp.task('default', function() {
  if (p) { p.kill(); }
  // Note: The 'watch' is the new name of your normally-default gulp task. Substitute if needed.
  p = spawn('gulp', ['watch'], {stdio: 'inherit'});
});
Your watch task was probably your default gulp task. Rename it to watch and add a gulp.watch() for watching your gulpfile and running the default task on changes:
gulp.task('watch', ['sass'], function () {
  gulp.watch("scss/*.scss", ['sass']);
  gulp.watch('gulpfile.js', ['default']);
});
Now, just run gulp and it will automatically reload if you change your gulpfile!
Try this code (win32 platform only):
gulp.task('default', ['less', 'scripts', 'watch'], function(){
  var p;
  var childProcess = require('child_process');
  gulp.watch('./gulpfile.js').once('change', function(){
    if (process.platform === 'win32') {
      // kill the previously spawned process (and its console window) first
      if (p) {
        childProcess.exec('taskkill /PID ' + p.pid + ' /T /F', function(){});
        p.kill();
      }
      // respawn gulp using the same node binary and script
      p = childProcess.spawn(process.argv[0], [process.argv[1]], {stdio: 'inherit'});
    }
  });
});
A good solution for Windows, which also works well with Visual Studio task runner.
/// <binding ProjectOpened='auto-watchdog' />
const gulp = require('gulp'),
      spawn = require('child-proc').spawn,
      configPaths = ['Gulpconfig.js', 'bundleconfig.js'];

gulp.task('watchdog', function () {
  // TODO: add other watches here
  gulp.watch(configPaths, function () {
    process.exit(0);
  });
});

gulp.task('auto-watchdog', function () {
  let p = null;
  gulp.watch(configPaths, spawnChildren);
  spawnChildren();

  function spawnChildren() {
    const args = ['watchdog', '--color'];
    // kill previous spawned process
    if (p) {
      // You might want to trigger a build as well
      args.unshift('build');
      setTimeout(function () {
        p.kill();
      }, 1000);
    }
    // `spawn` a child `gulp` process linked to the parent `stdio`
    p = spawn('gulp', args, { stdio: 'inherit' });
  }
});
Main changes compared to other answers:
Uses child-proc because child_process fails on Windows.
The watchdog exits itself on changes of the watched files because in Windows the gulp call is wrapped in a batch script. Killing the batch script wouldn't kill gulp itself, causing multiple watchers to be spawned over time.
Build on change: Usually a gulpfile change also warrants rebuilding the project.
Install nodemon globally: npm i -g nodemon
And add in your .bashrc (or .bash_profile or .profile) an alias:
alias gulp='nodemon --watch gulpfile.js --watch gulpfile.babel.js --quiet --exitcrash --exec gulp'
This will watch for changes to the files gulpfile.js and gulpfile.babel.js.
P.S. This can be helpful for endless tasks (like watch) but not for single-run tasks. Since it uses watch, the process keeps running even after the gulp task is done. ;)
Here's a short version that's easy to understand that you can set as a default task so you just need to type "gulp":
gulp.task('watch', function() {
  const restartingGulpProcessCmd = 'while true; do gulp watch2 --colors; done;';
  const restartingGulpProcess = require('child_process').exec(restartingGulpProcessCmd);
  restartingGulpProcess.stdout.pipe(process.stdout);
  restartingGulpProcess.stderr.pipe(process.stderr);
});

gulp.task('watch2', function() {
  gulp.watch(['config/**.js', 'webpack.config.js', './gulpfile.js'],
    () => {
      console.log('Config file changed. Quitting so gulp can be restarted.');
      process.exit();
    });
  // Add your other watch and build commands here
});

gulp.task('default', ['watch']);
I spent a whole day trying to make this work on Windows / Gulp 4.0.2, and I (finally) made it...
I used some solutions from people on this page and from one other page. It's all there in the comments...
Any change in any function inside "allTasks" will take effect when gulpfile.js (or another watched file) is saved...
There are some useless comments and console.logs left, feel free to remove them... ;)
const { gulp, watch, src, dest, series, parallel } = require("gulp");
const spawn = require('child_process').spawn;

// This function contains all that is necessary: start server, watch files...
const allTasks = function (callback) {
  console.log('==========');
  console.log('========== STARTING THE GULP DEFAULT TASK...');
  console.log('========== USE CTRL+C TO STOP THE TASK');
  console.log('==========');

  startServer();
  // other functions (watchers) here

  // *** Thanks to Sebazzz ***
  // Stop all on gulpfile.js change
  watch('gulpfile.js', function (callback) {
    callback(); // avoid "task didn't complete" error
    process.exit();
  });

  callback(); // avoid "task didn't complete" error
}

// Restart allTasks
// ********************************************
// CALL GULPDEFAULT WITH THE GULP DEFAULT TASK:
// exports.default = gulpDefault
// ********************************************
const gulpDefault = function (callback) {
  let p = null;
  watch('gulpfile.js', spawnChildren);

  // *** Thanks to Sphinxxx: ***
  // New behavior in gulp v4: The watcher function (spawnChildren()) is passed a callback argument
  // which must be called after spawnChildren() is done, or else the auto-reload task
  // never goes back to watching for further changes (i.e. the reload only works once).
  spawnChildren(callback);

  function spawnChildren(callback) {
    /*
    // This didn't do anything for me, with or without the delay,
    // so I left it there, but commented it out, together with the console.logs...
    // kill previous spawned process
    if (p) {
      // You might want to trigger a build as well
      //args.unshift('build');
      setTimeout(function () {
        console.log('========== p.pid before kill: ' + p.pid); // a random number
        console.log('========== p before kill: ' + p); // [object Object]
        p.kill();
        console.log('========== p.pid after kill: ' + p.pid); // the same random number
        console.log('========== p after kill: ' + p); // still [object Object]
      }, 1000);
    }
    */

    // `spawn` a child `gulp` process linked to the parent `stdio`
    // ['watch'] is the task that calls the main function (allTasks):
    // exports.watch = allTasks;
    p = spawn('gulp', ['watch'], { stdio: 'inherit', shell: true });
    // *** Thanks to people from: ***
    // https://stackoverflow.com/questions/27688804/how-do-i-debug-error-spawn-enoent-on-node-js
    // Prevent Error: spawn ENOENT
    // by passing "shell: true" to the spawn options
    callback(); // callback called - thanks to Sphinxxx
  }
}

exports.default = gulpDefault;
exports.watch = allTasks;
Install gulp-restart
npm install gulp-restart
This code will work for you.
var gulp = require('gulp');
var restart = require('gulp-restart');

gulp.task('watch', function() {
  gulp.watch(['gulpfile.js'], restart);
})
It will restart gulp when you make changes to gulpfile.js.
