I currently have three gulp tasks that leverage gulp-watch. They are basic in nature and are represented here (simplified) as follows...
var gulp = require('gulp');
var watch = require('gulp-watch');

gulp.task('watch:a', function() {
  return watch('a/**/*.js', function () {
    // [...]
  });
});

gulp.task('watch:b', function() {
  return watch('b/**/*.js', function () {
    // [...]
  });
});

gulp.task('watch:c', function() {
  return watch('c/**/*.js', function () {
    // [...]
  });
});
However, with my current workflow I'm forced to open three terminals and fire them off individually.
Is there a way I can instead have one gulp task that spawns three separate terminal windows, each running one of the tasks? I have looked into child_process but have not been able to craft a solution. Ideally, I'm visualizing something like this...
gulp.task('watch', function() {
  launchProcess('gulp watch:a');
  launchProcess('gulp watch:b');
  launchProcess('gulp watch:c');
});
Where launchProcess has some magic so I can consolidate these into one command. I'm simply searching for convenience here, since there could be more than three processes. I cringe at the thought of manually firing tons of these processes off.
Here is my initial attempt, taken from Answer: Gulp – How Can I Open A New Tab In Terminal?, but this (just trying to fire a single watcher) does not make my watcher task work as expected: nothing happens on a file change.
var exec = require('child_process').exec;

gulp.task('watch', function(cb) {
  exec('gulp watch:a', function (err, stdout, stderr) {
    console.log(stdout);
    console.log(stderr);
    cb(err);
  });
});
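For context, exec() buffers the child's stdout/stderr and only fires its callback once the child process exits, which a watcher never does, so nothing appears on file changes. If separate long-lived child processes are really the goal, a launchProcess helper could be sketched with spawn instead (launchProcess is the hypothetical name from the question, and the spawn options are an assumption; note this keeps all the watchers in the current terminal rather than opening new windows):

var spawn = require('child_process').spawn;

// Rough sketch: split the command string and spawn it with inherited stdio,
// so each watcher's output streams straight into the current terminal.
function launchProcess(command) {
  var parts = command.split(' ');
  return spawn(parts[0], parts.slice(1), { stdio: 'inherit', shell: true });
}

gulp.task('watch', function() {
  launchProcess('gulp watch:a');
  launchProcess('gulp watch:b');
  launchProcess('gulp watch:c');
});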
Your own solution isn't really what I expected from your question because, as you said yourself, it's not opening new terminal tabs or anything.
If you're happy with that, the line below will have the same effect as your answer. It also avoids gulp.start(), which isn't recommended for use by the authors of Gulp.
gulp.task('watch', ['watch:a', 'watch:b', 'watch:c']);
Or if possible you could always combine your watch tasks like below. Although you then lose the ability to run those tasks individually if that's something you want to do.
gulp.task('watch', function() {
  watch('a/**/*.js', function () {
    // [...]
  });
  watch('b/**/*.js', function () {
    // [...]
  });
  watch('c/**/*.js', function () {
    // [...]
  });
});
I think I've found a perhaps unconventional way to pull this off, given the deprecation comments on this gulp.js project issue, but all testing indicates it works as expected. I can simply call gulp.start(), which I don't see in the gulp.js API docs. Hm, seems good to me...
gulp.task('watch', function() {
  gulp.start('watch:a');
  gulp.start('watch:b');
  gulp.start('watch:c');
});
All tasks seem to be listening appropriately in a single terminal instance. I'll take it!
Related
I have a set of Gulp (v4) tasks that do things like compile Webpack and Sass, compress images, etc. These tasks are automated with a "watch" task while I'm working on a project.
When my watch task is running, if I save a file, the "default" set of tasks gets run. If I save again before the "default" task finishes, another "default" task begins, resulting in multiple "default" tasks running concurrently.
I've fixed this by checking that the "default" task isn't running before triggering a new one, but this has caused some slowdown issues when I save a file, then rapidly make another minor tweak and save again. Doing this means that only the first change gets compiled, and I have to wait for the entire process to finish, then save again for the new change to get compiled.
My idea to circumvent this is to kill all the old "default" tasks whenever a new one gets triggered. This way, multiples of the same task won't run concurrently, but I can rely on the most recent code being compiled.
I did a bit of research, but I couldn't locate anything that seemed to match my situation.
How can I kill all the "old" gulp tasks, without killing the "watch" task?
EDIT 1: Current working theory is to store the "default" task set as a variable and somehow use that to kill the process, but that doesn't seem to work how I expected it to. I've placed my watch task below for reference.
// watch task, runs through all primary tasks, triggers when a file is saved
GULP.task("watch", () => {
  // set up a browser_sync server, if --sync is passed
  if (PLUGINS.argv.sync) {
    CONFIG_MODULE.config(GULP, PLUGINS, "browsersync").then(() => {
      SYNC_MODULE.sync(GULP, PLUGINS, CUSTOM_NOTIFIER);
    });
  }

  // watch for any changes
  const WATCHER = GULP.watch("src/**/*");

  // run default task on any change
  WATCHER.on("all", () => {
    if (!currently_running) {
      currently_running = true;
      GULP.task("default")();
    }
  });

  // end the task
  return;
});
https://github.com/JacobDB/new-site/blob/4bcd5e82165905fdc05d38441605087a86c7b834/gulpfile.js#L202-L224
EDIT 2: Thinking about this more, maybe this is more a Node.js question than a Gulp question – how can I stop a function from processing from outside that function? Basically I want to store the executing function as a variable somehow, and kill it when I need to restart it.
There are two ways to set up a Gulp watch. They look very similar, but have the important difference that one supports queueing (and some other features) and the other does not.
The way you're using, which boils down to
const watcher = watch(<path glob>)

watcher.on(<event>, function(path, stats) {
  <event handler>
});
uses the chokidar instance that underlies Gulp's watch().
When using the chokidar instance, you do not have access to the Gulp watch() queue.
The other way to run a watch boils down to
function watch() {
  gulp.watch(<path>, function(callback) {
    <handler>
    callback();
  });
}
or more idiomatically
function myTask() {…}
const watch = () => gulp.watch(<path>, myTask)
Set up like this, watch events should queue the way you're expecting, without your having to do anything extra.
In your case, that's replacing your const WATCHER = GULP.watch("src/**/*"); with
GULP.watch("src/**/*", default);
and deleting your entire WATCHER.on(…);
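As a rough illustration (not code from the original answer), the watch task from the question might end up looking like the sketch below; gulp.series("default") resolves the registered task by its name, so the reserved word default never has to be used as an identifier:

GULP.task("watch", () => {
  // gulp.watch() queues the "default" series on every change,
  // so overlapping runs are no longer possible
  GULP.watch("src/**/*", GULP.series("default"));
});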
Bonus 1
That said, be careful with recursion there. I'm extrapolating from your use of a task named "default"… You don't want to find yourself in
const watch = () => gulp.watch("src/**/*", default);
const default = gulp.series(clean, build, serve, watch);
Bonus 2
Using the chokidar instance can be useful for logging:
function handler() {…}

const watcher = gulp.watch(glob, handler);

watcher.on('all', (event, path) => {
  console.log(path + ': ' + event + ' detected'); // e.g. "src/test.txt: change detected" is logged immediately
});
Bonus 3
Typically Browsersync would be set up outside of the watch function, and the watch would end in reloading the server. Something like
…
import browserSync from 'browser-sync';
const server = browserSync.create();
function serve(done) {
  server.init(…);
  done();
}

function reload(done) {
  server.reload();
  done();
}

function changeHandler() {…}

const watch = () => gulp.watch(path, gulp.series(changeHandler, reload));
const run = gulp.series(serve, watch);
Try installing gulp-restart:
npm install gulp-restart
As @henry stated, if you switch to the non-chokidar version you get queuing for free (because it is the default). See no queue with chokidar.
But that doesn't speed up your task completion time. There was an issue requesting that the ability to stop a running task be added to gulp - how to stop a running task - it was summarily dealt with.
If one of your concerns is to speed up execution time, you can try the lastRun() function option. gulp lastRun documentation
Retrieves the last time a task was successfully completed during the current running process. Most useful on subsequent task runs while a watcher is running.
When combined with src(), enables incremental builds to speed up execution times by skipping files that haven't changed since the last successful task completion.
const { src, dest, lastRun, watch } = require('gulp');
const imagemin = require('gulp-imagemin');

function images() {
  return src('src/images/**/*.jpg', { since: lastRun(images) })
    .pipe(imagemin())
    .pipe(dest('build/img/'));
}

exports.default = function() {
  watch('src/images/**/*.jpg', images);
};
This example is from the same documentation. In this case, if an image was successfully compressed during the currently running process, it will not be re-compressed. Depending on your other tasks, this may cut down on the time you wait for queued tasks to finish.
So I'm trying to create a gulp workflow and I'd like to implement options for some tasks, like gulp copy-images --changed. Now, I've created a watch task that watches all image files, and it should start the copy-images task with the --changed flag.
Ideally, I want to do something like this:
gulp.task('copy-images', function(){
  // some code
});

gulp.task('watch', function(){
  gulp.watch(config.images, ['copy-images --changed']);
});
I'm also very aware that I could do:
gulp.task('copy-images', function(){
  // some code
});

gulp.task('copy-images-changed', function(){
  // some code
});

gulp.task('watch', function(){
  gulp.watch(config.images, ['copy-images']);
});
but this means duplicate code.
Anyone with a solution or maybe some advice?
Thanks in advance!
Gulp does not provide a built-in way of specifying options for tasks. You have to use an external options parser module like yargs. See this question for more on that topic.
This also means that passing something like ['copy-images --changed'] to gulp.watch() will not work. The entire string will just be interpreted as a task name.
The best approach for you would be to factor out the code of your task into a function and then call this function from both your task and your watch:
var argv = require('yargs').argv;

function copyImages(opts) {
  if (opts.changed) {
    // some code
  } else {
    // some other code
  }
}

gulp.task('copy-images', function() {
  copyImages(argv);
});

gulp.task('watch', function(){
  gulp.watch(config.images, function() {
    copyImages({changed: true});
  });
});
The above should cover all of your bases:
gulp copy-images will execute //some other code.
gulp copy-images --changed will execute //some code.
gulp watch will execute //some code any time a watched file is changed.
I'm getting frustrated with part of a Yeoman Generator I'm building. As it's my first, I have no doubt I'm missing something obvious, but here it goes.
Simply put, I'm trying to log a message, Do Things™ and then log another message only when those things have been done.
Here's the method:
repos: function () {
  var self = this;

  this.log(highlightColour('Pulling down the repositories'));

  // Skeleton
  this.remote('user', 'skeleton', 'master', function(err, remote) {
    if (!err) {
      remote.bulkDirectory('.', self.destinationRoot());
    } else {
      self.log('\n');
      self.log(alertColour('Failed to pull down Skeleton'));
      repoErr = true;
    }
  });

  //
  // Three more near identical remote() tasks
  //

  if (!repoErr) {
    self.log(successColour('Success!'));
    self.log('\n');
  } else {
    self.log(alertColour('One or more repositories failed to download!'));
  }
},
Each of the individual remote() tasks works fine, but I get both the first and last self.log() messages before the file copying happens. It seems trivial, but I simply want the success message to come after everything has been completed.
For example, in the terminal I see:
Pulling down the repositories
Success!
file copying results
It should be:
Pulling down the repositories
file copying results
Success!
I thought it could be something to do with using this.async() with done() at the end of each remote() task, and I tried that, but whenever I do, none of the code fires at all.
I've even tried breaking everything (including the messages) into separate methods, but still no luck.
Such a simple goal, but I'm out of ideas! I'd be grateful for your help!
EDIT: In case you're wondering, I know the messages are coming first because any alerts regarding file conflicts are coming after the messages :)
This is not an issue related to Yeoman. You have asynchronous code, but you're handling it as if it were synchronous.
In the example you posted here, just do the logging as part of this.remote callback:
repos: function () {
  var self = this;

  this.log(highlightColour('Pulling down the repositories'));

  // Skeleton
  this.remote('user', 'skeleton', 'master', function(err, remote) {
    if (!err) {
      remote.bulkDirectory('.', self.destinationRoot());
      self.log(successColour('Success!'));
      self.log('\n');
    } else {
      self.log('\n');
      self.log(alertColour('Failed to pull down Skeleton'));
      self.log(alertColour('One or more repositories failed to download!'));
    }
  });
},
Maybe your actual use case is more complex; in this case you can use a module like async (or any other alternative) to handle more complex async flow. Either way, Yeoman doesn't provide helpers to handle asynchronous code as this is the bread and butter of Node.js.
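For the fuller case with several repositories, a sketch along the following lines (using the async module; the repository names and the this.async() call are assumptions based on the question, not code from the original answer) would only log the final messages once every remote() callback has fired:

// at the top of the generator file
var async = require('async');

// inside the generator's prototype
repos: function () {
  var self = this;
  var done = this.async(); // keep the run loop waiting until done() is called

  this.log(highlightColour('Pulling down the repositories'));

  // build one async-style task per repository
  function pull(repo) {
    return function (cb) {
      self.remote('user', repo, 'master', function (err, remote) {
        if (!err) {
          remote.bulkDirectory('.', self.destinationRoot());
        } else {
          self.log(alertColour('Failed to pull down ' + repo));
        }
        cb(err);
      });
    };
  }

  async.parallel([pull('skeleton'), pull('another-repo')], function (err) {
    if (!err) {
      self.log(successColour('Success!'));
    } else {
      self.log(alertColour('One or more repositories failed to download!'));
    }
    done();
  });
},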
I'm creating my first ever Grunt plugin. I would really like to practice testing my plugin as I develop. I've selected Mocha as my testing framework, as it seems popular, and set up my Gruntfile to watch the test files and run them automatically when they are altered. This all looks good.
But I've not found a lot of documentation on how to actually test a Grunt plugin. I've looked at the code of about a dozen different Grunt plugins, especially the contrib ones, and they don't make a lot of sense to me.
As I try to test my code I'm trying to break things down into very specific functions. So, here is the basic structure of a plugin, with a function in it.
'use strict';

function testOutsideOfTask() {
  return "i am outside";
}

module.exports = function(grunt) {
  grunt.registerMultiTask('example_task', 'I do a thing.', function() {
    function testInsideOfTask() {
      return "i am inside";
    }
  });
};
I've included a couple of functions explicitly to make sure my testing is working, and it's not. Neither of these functions seems to be accessible... How can I access both of them for testing? Here is the Mocha test as I have it.
var grunt = require('grunt');
var assert = require("assert");

describe('testOutsideOfTask', function() {
  it('do something', function() {
    assert.equal("i am outside", testOutsideOfTask());
  });
});

describe('testInsideOfTask', function() {
  it('do something', function() {
    assert.equal("i am inside", testInsideOfTask());
  });
});
Both tests fail with something like this. So somehow the test just can't access the functions; however, when I look at other testing examples, they don't seem to specifically require the file that is being tested... For example https://github.com/gruntjs/grunt-contrib-clean/blob/master/test/clean_test.js
1) testOutsideOfTask should do something:
ReferenceError: testOutsideOfTask is not defined
Thanks very much!
You want to be testing functionality, ideally. Your Grunt plugin should end up as a wrapper around more abstract methods that do what you want but aren't necessarily tied to how Grunt works; then, for the actual plugin, you run those methods on each file, log some output, etc. Something like this:
var lib = require('./lib.js');

module.exports = function(grunt) {
  grunt.registerMultiTask('test', function() {
    lib.doSomeMethod();
    grunt.log.ok('Something happened');
  });
};
So in that pseudocode you would want to actually test the doSomeMethod function, and not that Grunt registered a task or logged to the CLI. That's already well tested.
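For instance, a Mocha test against the extracted module might look roughly like the following (a sketch only; lib.js, doSomeMethod and the expected value are placeholders carried over from the pseudocode above):

var assert = require('assert');
var lib = require('./lib.js');

describe('doSomeMethod', function() {
  it('does the work the Grunt task wraps', function() {
    // assert against whatever doSomeMethod is actually supposed to return
    assert.equal(lib.doSomeMethod(), 'expected value');
  });
});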
A lot of Grunt plugins work in this way, in fact many are wrappers around existing node modules. For example, grunt-recess is a wrapper of twitter/recess:
https://github.com/sindresorhus/grunt-recess
https://github.com/twitter/recess
One of my own modules is more Grunt specific, but the tests for that focus on the actual functionality of the module. You can have a look at that here:
https://github.com/ben-eb/grunt-available-tasks/blob/master/test/lib/filterTasks.test.js
I'm also testing with Mocha. I'd recommend that you use grunt-mocha-test to run your tests through Grunt (as well as JSHint and others).
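A minimal Gruntfile entry for grunt-mocha-test might look something like this (a sketch based on that plugin's standard usage; the test glob is an assumption):

module.exports = function(grunt) {
  grunt.initConfig({
    mochaTest: {
      test: {
        options: { reporter: 'spec' },
        src: ['test/**/*.js']
      }
    }
  });

  grunt.loadNpmTasks('grunt-mocha-test');
  grunt.registerTask('default', ['mochaTest']);
};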
I'm building a custom Yeoman generator that installs a lot of preprocessor compilers like CoffeeScript, LESS and Jade. In the Gruntfile that my generator creates I have a build task which compiles everything. However, until that build task has run at least once, the compiled HTML, CSS and JavaScript files don't exist, which can be confusing if I try to run the grunt watch/connect server right after scaffolding.
What is the best way to have my generator run that Grunt build step at the end of the installation? The end event that's already being used to call this.installDependencies seems like the right place to do that, but how should I communicate with Grunt?
If you follow the stack, this.installDependencies eventually works its way down to https://github.com/yeoman/generator/blob/45258c0a48edfb917ecf915e842b091a26d17f3e/lib/actions/install.js#L36:
this.spawnCommand(installer, args, cb)
  .on('error', cb)
  .on('exit', this.emit.bind(this, installer + 'Install:end', paths))
  .on('exit', function (err) {
    if (err === 127) {
      this.log.error('Could not find ' + installer + '. Please install with ' +
        '`npm install -g ' + installer + '`.');
    }
    cb(err);
  }.bind(this));
Chasing this down further, this.spawnCommand comes from https://github.com/yeoman/generator/blob/master/lib/actions/spawn_command.js:
var spawn = require('child_process').spawn;
var win32 = process.platform === 'win32';

/**
 * Normalize a command across OS and spawn it.
 *
 * @param {String} command
 * @param {Array} args
 */
module.exports = function spawnCommand(command, args) {
  var winCommand = win32 ? 'cmd' : command;
  var winArgs = win32 ? ['/c'].concat(command, args) : args;
  return spawn(winCommand, winArgs, { stdio: 'inherit' });
};
In other words, in your Generator's code, you can call this.spawnCommand anytime, and pass it the arguments you wish the terminal to run. As in, this.spawnCommand('grunt', ['build']).
So then the next question is where do you put that? Thinking linearly, you can only trust that grunt build will work after all of your dependencies have been installed.
From https://github.com/yeoman/generator/blob/45258c0a48edfb917ecf915e842b091a26d17f3e/lib/actions/install.js#L67-69,
this.installDependencies accepts a callback, so your code might look like this:
this.on('end', function () {
  this.installDependencies({
    skipInstall: this.options['skip-install'],
    callback: function () {
      this.spawnCommand('grunt', ['build']);
    }.bind(this) // bind the callback to the parent scope
  });
});
Give it a shot! If all goes well, you should add some error handling on top of that new this.spawnCommand call to be safe.
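For example, since spawnCommand returns a Node ChildProcess, error handling could be layered onto the callback roughly like this (a sketch, not part of the original answer):

callback: function () {
  this.spawnCommand('grunt', ['build'])
    .on('error', function (err) {
      this.log.error('Failed to start grunt: ' + err.message);
    }.bind(this))
    .on('exit', function (code) {
      if (code !== 0) {
        this.log.error('grunt build exited with code ' + code);
      }
    }.bind(this));
}.bind(this)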
I've used Stephen's great answer, implemented in the following way with a custom event to keep things tidy.
MyGenerator = module.exports = function MyGenerator(args, options, config) {
  this.on('end', function () {
    this.installDependencies({
      skipInstall: options['skip-install'],
      callback: function() {
        // Emit a new event - dependencies installed
        this.emit('dependenciesInstalled');
      }.bind(this)
    });
  });

  // Now you can bind to the dependencies installed event
  this.on('dependenciesInstalled', function() {
    this.spawnCommand('grunt', ['build']);
  });
};
This question is a bit old already, but I still want to make this addition in case somebody missed it. Post-install processes are now much easier to implement. Have a look at the run loop and use the end method, where you can run all the post-install things.
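As a rough sketch (assuming a generator built on yeoman-generator's Base class; the grunt build command is just an example carried over from the answers above), the end priority of the run loop could be used like this:

var generators = require('yeoman-generator');

module.exports = generators.Base.extend({
  // runs last in the run loop, after installation has finished
  end: function () {
    this.spawnCommand('grunt', ['build']);
  }
});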