I have a set of Gulp (v4) tasks that do things like run Webpack, compile Sass, compress images, etc. These tasks are automated with a "watch" task while I'm working on a project.
When my watch task is running, if I save a file, the "default" set of tasks gets run. If I save again before the "default" task finishes, another "default" task begins, resulting in multiple "default" tasks running concurrently.
I've fixed this by checking that the "default" task isn't running before triggering a new one, but this causes a slowdown when I save a file, rapidly make another minor tweak, and save again. Doing this means that only the first change gets compiled, and I have to wait for the entire process to finish, then save again for the new change to get compiled.
My idea to circumvent this is to kill all the old "default" tasks whenever a new one gets triggered. This way, multiples of the same task won't run concurrently, but I can rely on the most recent code being compiled.
I did a bit of research, but I couldn't locate anything that seemed to match my situation.
How can I kill all the "old" gulp tasks, without killing the "watch" task?
EDIT 1: My current working theory is to store the "default" task set in a variable and somehow use that to kill the process, but that doesn't seem to work the way I expected. I've placed my watch task below for reference.
// watch task, runs through all primary tasks, triggers when a file is saved
GULP.task("watch", () => {
    // set up a browser_sync server, if --sync is passed
    if (PLUGINS.argv.sync) {
        CONFIG_MODULE.config(GULP, PLUGINS, "browsersync").then(() => {
            SYNC_MODULE.sync(GULP, PLUGINS, CUSTOM_NOTIFIER);
        });
    }

    // watch for any changes
    const WATCHER = GULP.watch("src/**/*");

    // run default task on any change
    WATCHER.on("all", () => {
        if (!currently_running) {
            currently_running = true;
            GULP.task("default")();
        }
    });

    // end the task
    return;
});
https://github.com/JacobDB/new-site/blob/4bcd5e82165905fdc05d38441605087a86c7b834/gulpfile.js#L202-L224
EDIT 2: Thinking about this more, maybe this is more of a Node.js question than a Gulp question – how can I stop a function mid-execution from outside that function? Basically, I want to store the executing function as a variable somehow, and kill it when I need to restart it.
There are two ways to set up a Gulp watch. They look very similar, but have the important difference that one supports queueing (and some other features) and the other does not.
The way you're using, which boils down to
const watcher = watch(<path glob>);

watcher.on(<event>, function(path, stats) {
    <event handler>
});
uses the chokidar instance that underlies Gulp's watch().
When using the chokidar instance, you do not have access to the Gulp watch() queue.
The other way to run a watch boils down to
function watch() {
    gulp.watch(<path>, function(callback) {
        <handler>
        callback();
    });
}
or more idiomatically
function myTask() {…}
const watch = () => gulp.watch(<path>, myTask);
Set up like this, watch events should queue the way you're expecting, without your having to do anything extra.
In your case, that's replacing your const WATCHER = GULP.watch("src/**/*"); with
GULP.watch("src/**/*", GULP.task("default"));
(since default is a reserved word in JavaScript, the registered task is retrieved with GULP.task("default")) and deleting your entire WATCHER.on(…); block.
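For reference, here's a minimal sketch of what the full rewritten watch task might look like, reusing the module names from your snippet (an illustration, not your actual gulpfile):
// Sketch of the rewritten watch task, based on the snippet in the question
GULP.task("watch", () => {
    // set up a browser_sync server, if --sync is passed
    if (PLUGINS.argv.sync) {
        CONFIG_MODULE.config(GULP, PLUGINS, "browsersync").then(() => {
            SYNC_MODULE.sync(GULP, PLUGINS, CUSTOM_NOTIFIER);
        });
    }

    // passing the task function directly lets Gulp queue overlapping runs;
    // the watcher keeps the process alive, so the task never "completes"
    GULP.watch("src/**/*", GULP.task("default"));
});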
Bonus 1
That said, be careful with recursion there. I'm extrapolating from your use of a task named "default"… You don't want to find yourself in
const watch = () => gulp.watch("src/**/*", defaultTask);
const defaultTask = gulp.series(clean, build, serve, watch); // watch re-runs the very series that launched it
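One way to avoid it (a sketch; clean, compile, and serve are assumed task names) is to keep the watch out of the series it triggers:
// build does the actual work; watch re-runs build, never the series containing itself
const build = gulp.series(clean, compile);
const watch = () => gulp.watch("src/**/*", build);
exports.default = gulp.series(build, serve, watch);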
Bonus 2
Using the chokidar instance can be useful for logging:
function handler() {…}
const watcher = gulp.watch(glob, handler);

watcher.on('all', (event, path) => {
    console.log(path + ': ' + event + ' detected'); // e.g. "src/test.txt: change detected" is logged immediately
});
Bonus 3
Typically Browsersync would be set up outside of the watch function, and the watch would end in reloading the server. Something like
…
import browserSync from 'browser-sync';
const server = browserSync.create();

function serve(done) {
    server.init(…);
    done();
}

function reload(done) {
    server.reload();
    done();
}

function changeHandler() {…}

const watch = () => gulp.watch(path, gulp.series(changeHandler, reload));

const run = gulp.series(serve, watch);
Try installing gulp-restart:
npm install gulp-restart
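If I remember the package README correctly, usage is roughly along these lines (treat the exact call as an assumption and verify against the package docs):
// Assumed usage of gulp-restart: restart the gulp process when the
// watched files (here, the gulpfile itself) change
var gulp = require('gulp');
var restart = require('gulp-restart');

gulp.watch([__filename], restart);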
As @henry stated, if you switch to the non-chokidar version you get queuing for free (because it is the default). See no queue with chokidar.
But that doesn't speed up your task completion time. There was an issue requesting that the ability to stop a running task be added to gulp - how to stop a running task - it was summarily dealt with.
If one of your concerns is to speed up execution time, you can try the lastRun() function option. gulp lastRun documentation
Retrieves the last time a task was successfully completed during the current running process. Most useful on subsequent task runs while a watcher is running.

When combined with src(), enables incremental builds to speed up execution times by skipping files that haven't changed since the last successful task completion.
const { src, dest, lastRun, watch } = require('gulp');
const imagemin = require('gulp-imagemin');

function images() {
    return src('src/images/**/*.jpg', { since: lastRun(images) })
        .pipe(imagemin())
        .pipe(dest('build/img/'));
}

exports.default = function() {
    watch('src/images/**/*.jpg', images);
};
Example from the same documentation. In this case, if an image was successfully compressed during the current running task, it will not be re-compressed. Depending on your other tasks, this may cut down on your wait time for the queued tasks to finish.
Related
We are building an Electron app that allows users to supply their own 'modules' to run. We are looking for a way to require the modules but then delete or kill them if need be.
We have looked at a few tutorials that seem to discuss this topic, but we can't get the modules to fully terminate. We explored this by using timers inside the modules, and we can observe the timers still running even after the module reference is deleted.
https://repl.it/repls/QuerulousSorrowfulQuery
index.js
// Load module
let Mod = require('./mod.js');

// Call the module function (which starts a setInterval)
Mod();

// Delete the module after 3 seconds
setTimeout(function () {
    Mod = null;
    delete Mod;
    console.log('Deleted!');
}, 3000);
./mod.js
function Mod() {
    setInterval(function () {
        console.log('Mod log');
    }, 1000);
}
module.exports = Mod;
Expected output
Mod log
Mod log
Deleted!
Actual output
Mod log
Mod log
Deleted!
Mod log
...
(continues to log 'Mod log' indefinitely)
Maybe we are overthinking it, and the modules won't be memory hogs, but the modules we load will have very intensive workloads, so having the ability to stop them at will seems important.
Edit with real use-case
This is how we are currently using this technique. The two issues are loading the module in the proper fashion and unloading the module after it is done.
renderer.js (runs in a browser context with access to document, etc)
const webview = document.getElementById('webview'); // A webview object essentially gives us control over a webpage, similar to how one can control an iframe in a regular browser.
const url = 'https://ourserver.com/module.js';

let mod;
request({
    method: 'get',
    url: url,
}, function (err, httpResponse, body) {
    if (!err) {
        mod = requireFromString(body, url); // Module is loaded
        mod(webview); // Module is run
        // ...
        // Some time later, the module needs to be 'unloaded'.
        // We are currently 'unloading' it by dereferencing the 'mod' variable,
        // but as mentioned above, this doesn't really work. We would like a way
        // to wipe the module (timers and all) and free up any memory or
        // resources it was using!
        mod = null;
        delete mod;
    }
});

function requireFromString(src, filename) {
    var Module = module.constructor;
    var m = new Module();
    m._compile(src, filename);
    return m.exports;
}
https://ourserver.com/module.js
// This code module will only have access to node modules that are packaged with our app, but that is OK for now!
let _ = require('lodash');

let obj = {
    key: 'value'
};

async function main(webview) {
    console.log(_.get(obj, 'key')); // prints 'value'
    webview.loadURL('https://google.com'); // loads Google in the webview
}

module.exports = main;
Just in case anyone reading is not familiar with Electron: renderer.js has access to 'webview' elements, which are almost identical to iframes. This is why passing one to module.js allows the module to access and manipulate the webpage, such as changing the URL, clicking buttons on that page, etc.
There is no way to kill a module and stop or close any resources that it is using. That's just not a feature of node.js. Such a module could have timers, open files, open sockets, running servers, etc. In addition, node.js does not provide a means of "unloading" code that was once loaded.
You can remove a module from the module cache, but that doesn't affect the existing, already loaded code or its resources.
The only foolproof way I know of is to load the user's module in a separate node.js app run as a child process; you can then exit or kill that process, and the OS will reclaim any resources it was using and unload everything from memory. This child-process scheme also has the advantage that the user's code is more isolated from your main server code. You could isolate it even further by running that process in a VM if you wanted to.
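A minimal sketch of that approach, assuming a hypothetical run-user-module.js wrapper that requires and starts the user's module on the child's side:
// main process: run the user's module in its own Node.js process
const { fork } = require('child_process');

// run-user-module.js is a hypothetical wrapper that does
// require('./mod.js')() inside the child process
const child = fork('./run-user-module.js');

// Later, "unload" the module by killing the whole process;
// the OS reclaims its timers, sockets, and memory
setTimeout(function () {
    child.kill(); // sends SIGTERM
    console.log('Deleted!');
}, 3000);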
I'm writing a gulp task that copies Sass files into a tmp folder and then creates CSS from them.
function copy_sass(done) {
    var conponments = setup.conponments;
    var sassList = config.conponments.sass;
    var mainPath = config.path.src.sass.main;
    var rootPath = config.path.src.sass.root;
    var source = getPaths(conponments, sassList, mainPath); // get a filtered list of paths
    var destination = config.path.tmp.sass_tmp;

    copyPath(mainPath + 'mixin/**', destination + 'main/mixin/');
    copyPath(mainPath + 'settings/**', destination + 'main/settings/');
    copyPath(rootPath + 'style.scss', destination);
    copyPath(source, destination + 'main/conponment/');

    done();
}
function css_build(done) {
    var source = config.path.tmp.sass_tmp + '**/*.scss';
    var destination = config.path.tmp.css.root;

    return src(source)
        .pipe(bulkSass())
        .pipe(sass())
        .pipe(csscomb())
        .pipe(cssbeautify({indent: ' '}))
        .pipe(autoprefixer())
        .pipe(gulp.dest(destination));

    done();
}
function copyPath(source, destination) {
    return src(source)
        .pipe(dest(destination));
}
exports.getcss = series(
    copy_sass,
    css_build
);

exports.filter = filter_tmp_sass;
exports.css = css_build;
When I call the functions in series via the getcss task, gulp doesn't seem to wait until the copy task is finished, and css_build does nothing because the paths haven't been copied yet.
When I launch the copy task and then the css task manually, everything works. So I think the problem is that the copy_sass function is considered finished before the copyPath calls inside it complete, and css_build is then launched before the paths are copied.
What I expect is for the getcss task to wait until copy_sass, and the copyPath calls inside it, have finished before launching css_build.
Node libraries handle asynchronicity in a variety of ways.
The Gulp file streams (usually started with src()) work asynchronously. This means that they start some kind of work but return immediately when called. This is the reason why you always need to return the stream from the task, so that Gulp knows when the actual work of the task is finished, as you can read in the Gulp documentation:
When a stream, promise, event emitter, child process, or observable is returned from a task, the success or error informs gulp whether to continue or end.
Regarding your specific example: in your copy_sass task you call copyPath multiple times. The copying is started, but the methods return immediately without waiting for completion. Afterwards, the done callback is called, telling Gulp that the task is done.
To ensure task completion, you need to return every single stream to Gulp. In your example, you could create a separate task for each copy operation and aggregate them via series() or even parallel():
function copyMixin() {
    return src('...').pipe(dest('...'));
}

function copySettings() {
    return src('...').pipe(dest('...'));
}

// ...

var copySass = parallel(copyMixin, copySettings, ...);
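Alternatively (a sketch of my own, assuming the merge-stream package and reusing the question's copyPath helper and config paths), the copies can stay in a single task if the streams are merged and the combined stream is returned; Gulp then waits for all of them:
var merge = require('merge-stream');

function copy_sass() {
    var mainPath = config.path.src.sass.main;
    var rootPath = config.path.src.sass.root;
    var source = getPaths(setup.conponments, config.conponments.sass, mainPath);
    var destination = config.path.tmp.sass_tmp;

    // the merged stream ends only once all four copy streams have ended,
    // so Gulp waits for every copy before starting css_build
    return merge(
        copyPath(mainPath + 'mixin/**', destination + 'main/mixin/'),
        copyPath(mainPath + 'settings/**', destination + 'main/settings/'),
        copyPath(rootPath + 'style.scss', destination),
        copyPath(source, destination + 'main/conponment/')
    );
}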
Is there any way to run a specific task when the task supplied by the user is not present in the gulpfile?
For example, if the user runs gulp build and there is no build task in the gulpfile, then a specific task (or the default task, it doesn't matter to me) should run.
As an analogy, consider the specified task as the 404 page for gulp.
gulp inherits from orchestrator, which contains a tasks instance variable: a plain object whose keys are the names of tasks added to the instance. If you replaced tasks with a Proxy, you could have it return a fallback task, such as default, by using the get trap handler:
const gulp = require('gulp')

gulp.tasks = new Proxy(gulp.tasks, {
    get (target, property) {
        if (target.hasOwnProperty(property)) {
            return target[property]
        }
        return target.default
    }
})
That code can be run at any time before the sequence of tasks is started, even after some tasks have already been added to the gulp instance.
Patrick's answer was really helpful, but I had to modify it a little since I was getting an ESLint warning and the following error:
TypeError: this.tasks.hasOwnProperty is not a function
So I thought I'd post my changes too. Here's my final code:
gulp.tasks = new Proxy(gulp.tasks, {
    get: function(target, property) {
        if (undefined !== target[property]) {
            return target[property];
        }
        return target.default; // or target["<Custom task name>"]
    }
});
I currently have three gulp tasks, leveraging gulp-watch. They are basic in nature, and are represented (simplified) as such...
var gulp = require('gulp');
var watch = require('gulp-watch');

gulp.task('watch:a', function() {
    return watch('a/**/*.js', function () {
        // [...]
    });
});

gulp.task('watch:b', function() {
    return watch('b/**/*.js', function () {
        // [...]
    });
});

gulp.task('watch:c', function() {
    return watch('c/**/*.js', function () {
        // [...]
    });
});
However, with my current workflow, I'm forced to open three terminals, and fire them off individually.
Is there a way I can instead have one gulp task which spawns three separate terminal windows, with each task running in one? I have looked into child_process but have not been able to craft a solution. Ideally, I'm visualizing something as such...
gulp.task('watch', function() {
    launchProcess('gulp watch:a');
    launchProcess('gulp watch:b');
    launchProcess('gulp watch:c');
});
Where launchProcess has some magic so I can consolidate these into one command. I'm simply searching for convenience here, since there could be more than three processes. I cringe at the thought of manually firing tons of these processes off.
Here is my initial attempt, taken from Answer: Gulp – How Can I Open A New Tab In Terminal?, but this (just trying to fire one watcher) does not let my watcher task work as expected - nothing happens on a file change.
var exec = require('child_process').exec;

gulp.task('watch', function(cb) {
    exec('gulp watch:a', function (err, stdout, stderr) {
        console.log(stdout);
        console.log(stderr);
        cb(err);
    });
});
Your own solution isn't really what I expected from your question because, as you said yourself, it's not opening new terminal tabs or anything.
If you're happy with that, the line below will have the same effect as your answer. It also avoids gulp.start(), which the authors of Gulp recommend against using.
gulp.task('watch', ['watch:a', 'watch:b', 'watch:c']);
Or, if possible, you could combine your watch tasks as below, although you then lose the ability to run those tasks individually, if that's something you want to do.
gulp.task('watch', function() {
    watch('a/**/*.js', function () {
        // [...]
    });
    watch('b/**/*.js', function () {
        // [...]
    });
    watch('c/**/*.js', function () {
        // [...]
    });
});
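If separate processes are genuinely required, one option (a sketch of my own, not from the original question) is spawn with inherited stdio. Unlike exec, which buffers output and only fires its callback when the child exits, spawn streams output live, which matters for watchers that never exit:
var spawn = require('child_process').spawn;

gulp.task('watch', function () {
    ['watch:a', 'watch:b', 'watch:c'].forEach(function (task) {
        // stdio: 'inherit' pipes the child's output straight to this terminal;
        // on Windows you may need { stdio: 'inherit', shell: true }
        spawn('gulp', [task], { stdio: 'inherit' });
    });
});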
I think I've found a perhaps unconventional way to pull this off, given the deprecation comments on this gulp.js project issue, but all testing indicates it works as expected. I can simply call gulp.start(), which I do not see in the gulp.js API docs. Hm, seems good to me...
gulp.task('watch', function() {
    gulp.start('watch:a');
    gulp.start('watch:b');
    gulp.start('watch:c');
});
All tasks seem to be listening appropriately in a single terminal instance. I'll take it!
I currently use nodemon or supervisor for automatic server restarting and automatic test case execution. But now my requirement is to run specific test cases when certain files change. For example, if app\models\user.js is modified, I want test\model\user-test.js to be executed.
In order to achieve that, I need to identify which files were modified. How can I do that using nodemon or supervisor?
I don't know if you can do that with nodemon or supervisor, but you could always write your own:
var watch = require('watch');

function methodToDoTestOnFile(file) {
    // IMPLEMENT
}

watch.createMonitor(filesToWatch, function (monitor) {
    monitor.on('changed', function (f) {
        // do some test on f
        methodToDoTestOnFile(f);
    });
});
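For the specific mapping in the question, methodToDoTestOnFile might look something like this (the path mapping and the use of Mocha as the test runner are assumptions):
var exec = require('child_process').exec;
var path = require('path');

function methodToDoTestOnFile(file) {
    // e.g. app/models/user.js -> test/model/user-test.js (mapping assumed)
    var testFile = file
        .replace(path.join('app', 'models'), path.join('test', 'model'))
        .replace(/\.js$/, '-test.js');

    // run only the test file that corresponds to the changed source file
    exec('mocha ' + testFile, function (err, stdout, stderr) {
        console.log(stdout);
        if (stderr) console.error(stderr);
    });
}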