I'm trying to create a Gulp task that does nothing except call a custom function. I have no source files and no destination files. I just want to call a custom function in a standalone task, so I can have other tasks depend on it.
For the life of me, I've searched Google and SO and I couldn't find an example. The closest I've come is this:
var through = require('through2');

gulp.task('my-custom-task', function () {
    return through.obj(function write(chunk, enc, callback) {
        // here is where the custom function is called
        myCustomFunction('foo', 'bar');
        callback(null, chunk);
    });
});
Now this does call myCustomFunction, but when I run the task with gulp my-custom-task, I can see the task starting but not finishing.
[10:55:37] Starting 'clean:tmp'...
[10:55:37] Finished 'clean:tmp' after 46 ms
[10:55:37] Starting 'my-custom-task'...
How should I write my task correctly?
If you just want a task that runs some function, then just do that:
gulp.task('my-custom-task', function () {
    myCustomFunction('foo', 'bar');
});
If your function does something asynchronously, it should call a callback at the end, so gulp is able to know when it’s done:
gulp.task('my-custom-task', function (callback) {
    myCustomFunction('foo', 'bar', callback);
});
As for why your solution does not work: through is a way to work with streams. You can use it to create handlers that you .pipe() into gulp streams. Since nothing in your task actually uses gulp streams, there is no need to create a stream handler or use through here. Your task hangs because gulp sees the returned stream and waits for its 'end' event, but since nothing is ever piped into that stream, the event never fires.
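For contrast, this is roughly how through.obj is meant to be used: as a transform piped into an actual gulp stream (the glob and dest path below are just illustrative):

var gulp = require('gulp');
var through = require('through2');

gulp.task('transform', function () {
    return gulp.src('src/**/*.js')
        .pipe(through.obj(function (file, enc, callback) {
            // runs once for every file flowing through the stream
            myCustomFunction('foo', file.path);
            callback(null, file);
        }))
        .pipe(gulp.dest('dist'));
});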
Related
I am working with gulp v4 and converting my tasks to functions. I have a simple clean function that executes two tasks in parallel.
const clean_server = () => del('build/server/*');
const clean_client = () => del('build/client/*');

export function clean(done) {
    gulp.parallel(clean_server, clean_client);
    done();
}
When I call done() the way I'm calling it above, which is also what the docs suggest (https://gulpjs.com/docs/en/getting-started/creating-tasks), I see the task get initiated, but it doesn't actually complete.
However, when I change it to:
export function clean(done) {
    gulp.parallel(clean_server, clean_client)(done);
}
This way works.
So my question is, why doesn't the first way as suggested by the docs complete the async task?
Because gulp.parallel() only composes the tasks into a new function; nothing runs until that function is invoked, with done passed as its callback. Either do it the way you did, or like the following:
gulp.parallel(done => {
    // del() returns a promise, so wait for both deletions
    // to finish before signalling completion
    Promise.all([clean_server(), clean_client()]).then(() => done());
})
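Alternatively, skip the wrapper entirely and export the composed function itself; gulp v4 accepts it directly as a task:

export const clean = gulp.parallel(clean_server, clean_client);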
I am pretty new to Node.js and come from a Java background. I apologize for the long question. I am working on a standalone Node.js application which has to perform steps sequentially. Below are the steps:
Step 1: The application has to call external-server-A, or else retry.
Step 2: Once the above call succeeds, it has to call external-server-B with the response of step 1, or else retry.
Step 3: Once the above call succeeds, it has to invoke a local module with the response of step 2 and call a function.
To avoid combining all the steps in one JS file, I would like to write the functions for the above steps in separate files and import them via require(). I am not sure how to call them sequentially.
Should I put require('./step2') and require('./step3') inside the callback blocks of the step 1 and step 2 functions?
Thanks in advance for the help.
You will want to require step2 and step3 at the top of your file, but expose them as functions which you can execute at a later time. You can also use promises to help you with writing your async code. For example:
// Step one
const step2 = require('./step2');
const step3 = require('./step3');

function someAsyncCallToExternalServerA() {
    /*
    Return a promise here which resolves when
    your call to external server A is successful
    */
}

someAsyncCallToExternalServerA()
    .then(function (serverAResults) { // I don't know if you need the serverA results or not
        // This will pass the results from the step2 success to the step3 function
        return step2(serverAResults).then(step3);
    })
    .then(function () {
        console.log('All done!');
    })
    .catch(function (err) {
        console.log('Failed: ', err);
    });
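For that chain to work, ./step2 (and similarly ./step3) would each export a function that returns a promise. A minimal sketch, where callExternalServerB stands in for your actual server call:

// step2.js
module.exports = function step2(serverAResults) {
    return new Promise(function (resolve, reject) {
        // call external-server-B with the response of step 1
        callExternalServerB(serverAResults, function (err, response) {
            if (err) reject(err);   // rejection lets the caller retry
            else resolve(response); // resolved value is passed to step3
        });
    });
};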
One way to go is to use various callbacks to trigger what you want, when you want.
Imagine two steps:
function step1(onSuccess, onError) {
    iDoSomethingAsync(function (err) {
        if (err) onError()
        else onSuccess()
    })
}
function step2(onSuccess, onError) {
    iDoSomethingElseAsync(function (err) {
        if (err) onError()
        else onSuccess()
    })
}
Then you can simply chain calls like that:
step1(step2, step1)
step1 is called and does something; if it returns no error, it calls step2. If there is an error, it calls step1 again.
In async programming, you have to understand that when you call someFunc(callback), someFunc HASN'T finished its job by the next line. But someFunc WILL HAVE finished its job by the time callback is called.
It's up to you to do whatever you want with callback, because you are guaranteed that the function has done its work (or errored) by then.
Going back to the step1/step2 example, here is the same call, retrying step1 after a one-second delay in case of an error:
step1(step2, setTimeout.bind(this, step1, 1000))
Once you think about it the right way, it's pretty simple, isn't it? If you are coming from Java, consider it a mix of lambdas, Tasks, and Futures/Promises.
Also, as the other answer pointed out, using a promise library will help you write cleaner code (and I recommend one, as my example isn't clean at all), but you still need to understand how everything works underneath.
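For example, here is a rough sketch of step1 wrapped in a promise (assuming the same callback-based iDoSomethingAsync, and step2 wrapped the same way):

function step1() {
    return new Promise(function (resolve, reject) {
        iDoSomethingAsync(function (err) {
            if (err) reject(err);
            else resolve();
        });
    });
}

// run step2 when step1 succeeds; retry step1 once on error
step1()
    .then(step2)
    .catch(function () { return step1(); });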
I'm new to Gulp (and not very comfortable with JavaScript).
When I use
gulp.task('sass', function () {
    gulp
        .src('myfile.scss')
        .pipe(sourcemaps.init())
        .pipe(sass(myoptions))
        .pipe(sourcemaps.write('./'))
        .pipe(gulp.dest('mypath'))
        .pipe(browserSync.stream({match: '**/*.css'}));
});
compilation completes in a few milliseconds.
But when I use
gulp.task('sass', function () {
return gulp
...
});
it takes several seconds to compile.
Can someone explain why?
Thanks.
Gulp uses Orchestrator to execute tasks. Your task returns a promise or a stream (in your case, a stream), which is used for sequencing.
When you return nothing, the caller can't know when your task has finished, which has at least two consequences:
- you may think it's finished (judging from the log) before it really is; the few milliseconds you see are not the real compile time
- following tasks may start too soon, and might even pick up an old version of the compiled CSS
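So the several seconds are the real compile time; the fix from your second snippet is correct. In full (same body as your first snippet, plus the return):

gulp.task('sass', function () {
    return gulp
        .src('myfile.scss')
        .pipe(sourcemaps.init())
        .pipe(sass(myoptions))
        .pipe(sourcemaps.write('./'))
        .pipe(gulp.dest('mypath'))
        .pipe(browserSync.stream({match: '**/*.css'}));
});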
I'm using Gulp in my small project in order to run tests and lint my code. When any of those tasks fail, Gulp always exits with return code 0. If I run jshint by hand, it exits with non-zero code as it should.
Here's my very simple gulpfile.
Do I need to somehow explicitly tell Gulp to return a meaningful value? Is this Gulp's fault, or maybe the gulp-jshint and gulp-jasmine plugins are to blame?
You need to 'return gulp.src(...' so that the task waits for the returned stream.
EDIT
Gulp tasks are asynchronous by nature. The actual work has not yet executed at the time 'gulp.src(...).pipe(...);' returns. In your example, the gulp tasks mark themselves as successful before the actual work runs.
There are several ways to make gulp wait for your actual task: use a callback, or return a stream or a promise.
https://github.com/gulpjs/gulp/blob/master/docs/API.md#async-task-support
The easiest way is just returning the stream from 'gulp.src(...).pipe(...)'. If a gulp task gets a stream, it will listen for the 'end' and 'error' events, which correspond to return codes 0 and 1. So, a full example for your 'lint' task would be:
gulp.task('lint', function () {
return gulp.src('./*.js')
.pipe(jshint('jshintrc.json'))
.pipe(jshint.reporter('jshint-stylish'));
});
A bonus is that now you can measure the actual time spent on your tasks.
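For completeness, the callback and promise variants mentioned above look like this (runLinter and runLinterAsync are hypothetical stand-ins for your real work):

// Callback variant: gulp waits until done is called
gulp.task('lint-cb', function (done) {
    runLinter(function (err) {
        done(err); // pass the error along so gulp reports a failure
    });
});

// Promise variant: gulp waits for the returned promise to settle
gulp.task('lint-promise', function () {
    return runLinterAsync();
});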
@robrich is right: you have to keep track of exit codes yourself, but there's no need for a forceful approach. The process global is an EventEmitter which you can bind your exit function to.
var exitCode = 0

gulp.task('test', function (done) {
    return require('child_process')
        .spawn('npm', ['test'], {stdio: 'pipe'})
        .on('close', function (code, signal) {
            if (code) exitCode = code
            done()
        })
})

gulp.on('err', function (err) {
    process.emit('exit') // or throw err
})

process.on('exit', function () {
    process.nextTick(function () {
        process.exit(exitCode)
    })
})
A fix has been introduced in this commit.
It works something like this:
gulp.src("src/**/*.js")
.pipe(jshint())
.pipe(jshint.reporter("default"))
.pipe(jshint.reporter("fail"));
I have been using it on circleci and it works a treat!
gulp-jshint has been struggling with how to fail the build on a jshint failure. On the one hand, we could crash the build inside jshint itself, but then you never get to the reporter. On the other hand, failing the build isn't something the default reporters do. I generally hook up my own reporter that keeps track of failures and then calls process.exit(1) from an .on('end', ...) handler. It's quite brute force, but it works like a charm. See https://github.com/wearefractal/gulp-jshint/issues/10
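A minimal sketch of that approach, assuming gulp-jshint's convention of attaching results to file.jshint and using map-stream for the custom reporter:

var gulp = require('gulp');
var jshint = require('gulp-jshint');
var map = require('map-stream');

var failed = false;

gulp.task('lint', function () {
    return gulp.src('./*.js')
        .pipe(jshint())
        .pipe(jshint.reporter('default'))
        .pipe(map(function (file, cb) {
            // gulp-jshint attaches its result to each file object
            if (!file.jshint.success) failed = true;
            cb(null, file);
        }))
        .on('end', function () {
            if (failed) process.exit(1); // brute force, as described above
        });
});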
Similar to Andy Piper's answer above, I have found the module stream-combiner2 to be useful when running a sequence of tasks, to ensure an exit code is emitted if there is an error somewhere. It can be used something like this:
var combiner = require('stream-combiner2');

var tasks = combiner.obj([
    gulp.src(files),
    task1(options),
    task2(options),
    gulp.dest('path/to/dest')
]);

tasks.on('error', function () {
    process.exit(1);
});
gulp.task('default', function (done) {
    return gulp.src( ... )
        .pipe(jasmine( ... ))
        .on('error', function (err) {
            done(err);
        });
});
Works for me
Is it possible to load a Node.js module asynchronously?
This is the standard code:
var foo = require("./foo.js"); // waiting for I/O
foo.bar();
But I would like to write something like this:
require("./foo.js", function(foo) {
foo.bar();
});
// doing something else while the hard drive is crunching...
Is there a way how to do this? Or is there a good reason why callbacks in require aren't supported?
While require is synchronous, and Node.js does not provide an asynchronous variant out of the box, you can easily build one for yourself.
First of all, you need to create a module. In my example I am going to write a module that loads data asynchronously from the file system, but of course YMMV. So, first the old-fashioned, unwanted, synchronous approach:
var fs = require('fs');
var passwords = fs.readFileSync('/etc/passwd');
module.exports = passwords;
You can use this module as usual:
var passwords = require('./passwords');
Now, what you want to do is turn this into an asynchronous module. As you cannot delay module.exports, what you do instead is instantly export a function that does the work asynchronously and calls you back once it is done. So you transform your module into:
var fs = require('fs');

module.exports = function (callback) {
    fs.readFile('/etc/passwd', function (err, data) {
        callback(err, data);
    });
};
Of course you can shorten this by directly providing the callback variable to the readFile call, but I wanted to make it explicit here for demonstration purposes.
Now when you require this module, at first, nothing happens, as you only get a reference to the asynchronous (anonymous) function. What you need to do is call it right away and provide another function as callback:
require('./passwords')(function (err, passwords) {
    // This code runs once the passwords have been loaded.
});
Using this approach you can, of course, turn any arbitrary synchronous module initialization to an asynchronous one. But the trick is always the same: Export a function, call it right from the require call and provide a callback that continues execution once the asynchronous code has been run.
Please note that for some people
require('...')(function () { ... });
may look confusing. Hence it may be better (although this depends on your actual scenario) to export an object with an asynchronous initialize function or something like that:
var fs = require('fs');

module.exports = {
    initialize: function (callback) {
        fs.readFile('/etc/passwd', function (err, data) {
            callback(err, data);
        });
    }
};
You can then use this module by using
require('./passwords').initialize(function (err, passwords) {
    // ...
});
which may be slightly more readable.
Of course you can also use promises or any other asynchronous mechanism which makes your syntax look nicer, but in the end, it (internally) always comes down to the pattern I just described here. Basically, promises & co. are nothing but syntactic sugar over callbacks.
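For instance, a sketch of the passwords module exporting a promise instead of a callback-taking function:

// passwords.js
var fs = require('fs');

module.exports = new Promise(function (resolve, reject) {
    fs.readFile('/etc/passwd', function (err, data) {
        if (err) reject(err);
        else resolve(data);
    });
});

// usage elsewhere:
require('./passwords').then(function (passwords) {
    // runs once the file has been read
});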
Once you build your modules like this, you can even build a requireAsync function that works like you initially suggested in your question. All you have to do is stick with a name for the initialization function, such as initialize. Then you can do:
var requireAsync = function (module, callback) {
    require(module).initialize(callback);
};

requireAsync('./passwords', function (err, passwords) {
    // ...
});
Please note, that, of course, loading the module will still be synchronous due to the limitations of the require function, but all the rest will be asynchronous as you wish.
One final note: If you want to actually make loading modules asynchronous, you could implement a function that uses fs.readFile to asynchronously load a file, and then run it through an eval call to actually execute the module, but I'd highly recommend against this: on the one hand, you lose all the convenience features of require, such as caching & co.; on the other hand, you'll have to deal with eval, and as we all know, eval is evil. So don't do it.
Nevertheless, if you still want to do it, basically it works like this:
var fs = require('fs');

var requireAsync = function (module, callback) {
    fs.readFile(module, { encoding: 'utf8' }, function (err, data) {
        var module = {
            exports: {}
        };
        var code = '(function (module) {' + data + '})(module)';
        eval(code);
        callback(null, module);
    });
};
Please note that this code is not "nice", and that it lacks any error handling, and any other capabilities of the original require function, but basically, it fulfills your demand of being able to asynchronously load synchronously designed modules.
Anyway, you can use this function with a module like
module.exports = 'foo';
and load it using:
requireAsync('./foo.js', function (err, module) {
    console.log(module.exports); // => 'foo'
});
Of course you can export anything else as well. Maybe, to be compatible with the original require function, it may be better to run
callback(null, module.exports);
as last line of your requireAsync function, as then you have direct access to the exports object (which is the string foo in this case). Due to the fact that you wrap the loaded code inside of an immediately executed function, everything in this module stays private, and the only interface to the outer world is the module object you pass in.
Of course one can argue that this usage of eval is not the best idea in the world, as it opens up security holes and so on; but if you require a module, you basically do nothing else anyway than eval-uating it. The point is: if you don't trust the code, eval is just as bad an idea as require. Hence in this special case, it might be fine.
If you are using strict mode, eval is no good for you, and you need to go with the vm module and use its runInNewContext function. Then, the solution looks like:
var fs = require('fs');
var vm = require('vm');

var requireAsync = function (module, callback) {
    fs.readFile(module, { encoding: 'utf8' }, function (err, data) {
        var sandbox = {
            module: {
                exports: {}
            }
        };
        var code = '(function (module) {' + data + '})(module)';
        vm.runInNewContext(code, sandbox);
        callback(null, sandbox.module.exports); // or sandbox.module…
    });
};
The npm module async-require can help you to do this.
Install
npm install --save async-require
Usage
var asyncRequire = require('async-require');

// Load script myModule.js
asyncRequire('myModule').then(function (module) {
    // module has been exported and can be used here
    // ...
});
The module uses vm.runInNewContext(), a technique discussed in the accepted answer. It has bluebird as a dependency.
(This solution appeared in an earlier answer but that was deleted by review.)
Yes: export a function accepting a callback, or maybe even export a full-featured promise object.
// foo.js + callback:
module.exports = function (cb) {
    setTimeout(function () {
        console.log('module loaded!');
        var fooAsyncImpl = {};
        // add methods, for example from db lookup results
        fooAsyncImpl.bar = console.log.bind(console);
        cb(null, fooAsyncImpl);
    }, 1000);
}

// usage
require("./foo.js")(function (foo) {
    foo.bar();
});
// foo.js + promise
var Promise = require('bluebird');

module.exports = new Promise(function (resolve, reject) {
    // async code here;
});

// using foo + promises
require("./foo.js").then(function (foo) {
    foo.bar();
});
Andrey's code above is the simplest answer that works, but it had a small mistake, so I'm posting the correction here as an answer. Also, I'm just using callbacks, not Bluebird/promises like Andrey's code.
/* 1. Create a module that does the async operation - request etc */

// foo.js + callback:
module.exports = function (cb) {
    setTimeout(function () {
        console.log('module loaded!');
        var foo = {};
        // add methods, for example from db lookup results
        foo.bar = function (test) {
            console.log('foo.bar() executed with ' + test);
        };
        cb(null, foo);
    }, 1000);
}

/* 2. From another module you can require the first module and specify your callback function */

// usage
require("./foo.js")(function (err, foo) {
    foo.bar('It Works!');
});

/* 3. You can also pass in arguments from the invoking function that can be utilised by the module - e.g. the "It Works!" argument */
For anyone using ES modules and top-level await, this just works out of the box, without callbacks, CommonJS require, or installing any packages like async-require.
// in foo.mjs
const foo = await doIOstuffHere(); // top-level await holds up importers
export default foo;

// in bar.mjs
import foo from "./foo.mjs";
foo.bar(); // this does not run till the async work in foo.mjs is finished
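Relatedly, if you want the asynchrony on the consumer side instead, dynamic import() loads a module asynchronously and returns a promise:

// works in both ESM and CommonJS contexts
import("./foo.mjs").then(function (mod) {
    mod.default.bar();
});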