I'm planning to use a somewhat more sophisticated set of conventions for importing assets in my webpack project. So I'm trying to write a plugin that rewrites parts of requested module locators and then passes them down the resolver waterfall.
Let's assume we just want to
check if a requested module starts with the # character and
if so, replace that with ./lib/. The new module locator should now be looked up by the default resolver.
This means when a file /var/www/source.js does require("#example"), it should then actually get /var/www/lib/example.js.
So far I've figured out I'm apparently supposed to use the module event hook for this purpose. That's also the way chosen by other answers which unfortunately did not help me too much.
So this is my take on the custom resolver plugin; it's pretty straightforward:
function MyResolver () {}

MyResolver.prototype.apply = function (compiler) {
  compiler.plugin('module', function (init, callback) {
    // Check if rewrite is necessary
    if (init.request.startsWith('#')) {
      // Create a new payload
      const modified = Object.assign({}, init, {
        request: './lib/' + init.request.slice(1)
      })

      // Continue the waterfall with modified payload
      callback(null, modified)
    } else {
      // Continue the waterfall with original payload
      callback(null, init)
    }
  })
}
However, using this (in resolve.plugins) doesn't work. Running webpack, I get the following error:
ERROR in .
Module build failed: Error: EISDIR: illegal operation on a directory, read
# ./source.js 1:0-30
Apparently, this is not the way to do things. But since I couldn't find much example material out there on the matter, I'm a little bit out of ideas.
To make this easier to reproduce, I've put this exact configuration into a GitHub repo. So if you're interested in helping, you may just fetch it:
git clone https://github.com/Loilo/webpack-custom-resolver.git
Then just run npm install and npm run webpack to see the error.
Update: Note that the plugin architecture changed significantly in webpack 4. The code below will no longer work on current webpack versions.
If you're interested in a webpack 4 compliant version, leave a comment and I'll add it to this answer.
I've found the solution; it was mainly triggered by reading the small doResolve() line in the docs.
The solution was a multiple-step process:
1. Running callback() is not sufficient to continue the waterfall.
To pass the resolving task back to webpack, I needed to replace
callback(null, modified)
with
this.doResolve(
  'resolve',
  modified,
  `Looking up ${modified.request}`,
  callback
)
(2. Fix the webpack documentation)
The docs were missing the third parameter (message) of the doResolve() method, resulting in an error when using the code as shown there. That's why I had given up on the doResolve() method when I found it before putting the question up on SO.
I've made a pull request, the docs should be fixed shortly.
3. Don't use Object.assign()
It seems that the original request object (named init in the question) must not be duplicated via Object.assign() to be passed on to the resolver.
Apparently it contains internal information that tricks the resolver into looking up the wrong paths.
So this line
const modified = Object.assign({}, init, {
  request: './lib/' + init.request.slice(1)
})
needs to be replaced by this:
const modified = {
  path: init.path,
  request: './lib/' + init.request.slice(1),
  query: init.query,
  directory: init.directory
}
That's it. To see it a bit clearer, here's the whole MyResolver plugin from above now working with the mentioned modifications:
function MyResolver () {}

MyResolver.prototype.apply = function (compiler) {
  compiler.plugin('module', function (init, callback) {
    // Check if rewrite is necessary
    if (init.request.startsWith('#')) {
      // Create a new payload
      const modified = {
        path: init.path,
        request: './lib/' + init.request.slice(1),
        query: init.query,
        directory: init.directory
      }

      // Continue the waterfall with modified payload
      this.doResolve(
        // "resolve" just re-runs the whole resolving of this module,
        // but this time with our modified request.
        'resolve',
        modified,
        `Looking up ${modified.request}`,
        callback
      )
    } else {
      this.doResolve(
        // Using "resolve" here would cause an infinite recursion,
        // use an array of the possibilities instead.
        [ 'module', 'file', 'directory' ],
        init,
        `Looking up ${init.request}`,
        callback
      )
    }
  })
}
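For reference, this is how the plugin above is wired up via resolve.plugins in the webpack configuration. A minimal sketch only: the entry/output values are placeholders, and MyResolver is assumed to be defined (or required) in the same file.

// webpack.config.js -- entry/output are placeholder values for illustration
module.exports = {
  entry: './source.js',
  output: { path: __dirname, filename: 'bundle.js' },
  resolve: {
    // Register the custom resolver defined above
    plugins: [ new MyResolver() ]
  }
}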
Related
I have a protractor-cucumber framework whose step definitions are somewhat structured as per this: https://github.com/cucumber/cucumber-js/blob/master/docs/support_files/step_definitions.md
I use a return and chain the promises together. Recently, I came across a different syntax, the async function. But when I try to convert my step definitions to async, all the helper files in the framework where I use module.exports and require() display the following warning:
[ts] File is a CommonJS module; it may be converted to an ES6 module.
When I run my test cases, they fail because I can't access these helper files due to the error. For example, I'm not able to access my page object files from my tests. I think they don't get exported like they used to.
Could someone please advise me on how I can change my test cases to async syntax without breaking them? How do I resolve the above issue without disrupting my tests in a major way?
Adding code
Here is a step from my step definition before the change
let { Given, Then, When } = require('cucumber');
Given(/^I am on the "([^"]*)" page$/, function (home) {
  home = this.url.FDI_HOME;
  return browser.get(home);
});
Here is a step definition, after I change it to an async function
let { Given, Then, When } = require('cucumber');
Given(/^I am on the "([^"]*)" page$/, async function (home) {
  home = this.url.HOME;
  await browser.get(home);
});
And I will change my other steps in a similar fashion. The problem arises when I try to run the above step: it fails, saying that it is not able to access this.url.HOME. I have another file to supply URLs, called urls.js, which looks something like this:
let targetStore = browser.params.store || 'bestbuy';
let FDI_HOST = browser.params.fdi;
module.exports = {
  HOME: 'https://homepage.com',
  Shop_Page: 'https://shop.com',
  storeLink: `http://www.${targetStore}.com`,
};
I see three dots under the word "module.exports" in VS code and when I hover over it, it displays an error saying: [ts] File is a CommonJS module; it may be converted to an ES6 module.
I have tried to find a resolution to this but haven't been able to. If I use the async () => {} syntax, the test cases fail, but when I use async function () {}, a few of the steps pass but not the others.
These are suggestions/hints. They visually indicate that vscode can perform an action to possibly refactor/improve your code, but they are not treated as errors.
You can disable them by adding "javascript.suggestionActions.enabled": false to your user/workspace settings.
Source: https://github.com/Microsoft/vscode/issues/47299
With RequireJS on the front-end, we can listen to see when modules get loaded into the runtime module cache using:
requirejs.onResourceLoad = function (context, map, depArray) {
console.log('onResourceLoad>>>', 'map.id:', map.id, 'context:', context);
};
Can we do this with Node.js somehow? It would be useful for debugging, especially when servers load different files (or load them in a different order) based on configuration.
I assume this might be documented in
https://nodejs.org/api/modules.html
but I am not seeing anything
If you look at the source code for require(), you will find this:
Module._load = function(request, parent, isMain) {
  if (parent) {
    debug('Module._load REQUEST %s parent: %s', request, parent.id);
  }
  // ...
This shows that you can leverage the debug() call to get the information you need. If you look further, you will notice that this logger is set up using util.debuglog('module'). This means that you need to run your application with the NODE_DEBUG environment variable set to module. You can do it the following way from the console:
NODE_DEBUG=module node main.js
This will log what you are looking for.
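As an aside, util.debuglog (which the module loader uses internally) is a public API, so you can use the same mechanism in your own code. A minimal sketch, where 'myapp' is just an example section name:
const util = require('util');

// Only prints when the NODE_DEBUG environment variable includes "myapp",
// e.g. NODE_DEBUG=myapp node main.js
const log = util.debuglog('myapp');

log('loading configuration from %s', './config.json');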
I'm not aware of a documented callback API for the purpose of module load callbacks (although a logging mechanism for module loading appears to exist).
Here's a quick workaround to the apparent lack of such a callback, by monkeypatching Module._load:
const Module = require('module');
const originalModuleLoad = Module._load;

Module._load = function() {
  console.log("Loading with arguments " + JSON.stringify(arguments));
  // Pass the result through so require() still returns the loaded module
  return originalModuleLoad.apply(this, arguments);
}
I executed the above code in a REPL and then did require('assert'). Lo and behold:
> require('assert')
Loading with arguments {"0":"assert","1":{"id":"<repl>","exports":{},"filename":null,"loaded":false,"children":[],"paths":["/Users/mz2/Projects/manuscripts-endnote-promo/repl/node_modules","/Users/mz2/Projects/manuscripts-endnote-promo/node_modules","/Users/mz2/Projects/node_modules","/Users/mz2/node_modules","/Users/node_modules","/Users/mz2/.nvm-fish/v6.1.0/lib/node_modules","/Users/mz2/.node_modules","/Users/mz2/.node_libraries","/Users/mz2/.nvm-fish/v6.1.0/lib/node"]},"2":false}
Please don't use code like the above for anything but debugging purposes.
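If you do patch the loader temporarily, you can restore the original one afterwards using the reference saved above:
// Undo the monkeypatch when you're done debugging
Module._load = originalModuleLoad;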
Because node.js modules are imported (required) synchronously, simply having the require statement means the module is imported.
While RequireJS can import modules asynchronously, which makes event listening an important feature, native require in Node.js leaves this necessity out. This way, as you probably know:
const mod = require('module')
// You can use the module here, async or sync.
To add to that, not only is require synchronous, but in order to use a module it has to be explicitly required in the file where it's used. This can be bypassed in several ways, but best practice is to require a module in every file where you use it.
For specific modules which require async initialization, either the module should provide an event, or you can wrap the init function using a promise or a callback. For example, using a promise:
const mod = require('module')

// Create a promise to initialize the module inside it:
const initialized = new Promise((resolve, reject) => {
  // Init module inside the promise:
  mod.init((error) => {
    if (error) {
      return reject(error)
    }
    // Resolve will indicate successful init:
    resolve()
  })
})

// Now with wrapped init, proceed when done:
initialized
  .then(() => {
    // Module is initialized, do what you need.
  })
  .catch(error => {
    // Handle init error.
  })
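Equivalently, the same initialized promise from the sketch above can be consumed with async/await:
// Consuming the `initialized` promise with async/await
async function run () {
  try {
    await initialized
    // Module is initialized, do what you need.
  } catch (error) {
    // Handle init error.
  }
}

run()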
I am new to Grunt... I just tried to implement a custom task (using TypeScript) that should iterate over a set of given files and do some processing. This is what I have so far...
function gruntFile(grunt: IGrunt): void {
  grunt.registerMultiTask("custom", "...", () => {
    this.files.forEach(function(next) {
      ...
    });
  });

  var config: grunt.config.IProjectConfig = {
    custom: {
      files: [
        "folder1/*.json",
        "folder2/**/*.json"
      ]
    }
  };

  grunt.initConfig(config);
}

(module).exports = gruntFile;
Currently I struggle with the configuration and how I can access the files array in my custom task handler function. Grunt gives me the error that it cannot read the property forEach of undefined. I also tried a configuration that looks like this...
var config = {
  custom: {
    files: [
      { src: "folder1/*.json" },
      { src: "folder2/**/*.json" }
    ]
  }
};
Not sure about that, but I have seen that in some tutorials...
I have seen a couple of sample grunt-files already, but in each example the configuration looks a bit different, or files are used in conjunction with imported tasks and modules, so the samples do not show how the configured files are accessed. Any guidance that helps me to better understand how it works (and what I am doing wrong) is appreciated.
Update
I found out that I can query options via the config-property, but I am not sure if this is the right way to do it. In my task-handler I do this to query the list of configured files...
var files = grunt.config.get("custom.files");
...which returns the expected array (but I find it a bit odd to query options via a path expression). I realized that (because of the TypeScript arrow function) the scope of this is not the context of the current task; that is the reason why files was always undefined. Changing the call to registerMultiTask to...
grunt.registerMultiTask("custom", "...", function() { ... });
...fixed this problem. I use wildcard characters in the path expressions; I was hoping that Grunt could expand those expressions and give me a list of all matching paths. Does this functionality exist, or do I have to create it on my own?
I was able to iterate over the configured files (file pattern) by using the following code...
grunt.registerMultiTask("custom", "...", function() {
  grunt.file
    .expand(this.data)
    .forEach(function(file) {
      ...
    });
});
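For comparison, here is a sketch of the more conventional multi-task setup that lets Grunt expand the globbing patterns for you. Note that it assumes the target is declared in one of Grunt's standard files formats (Files Array Format below), which is slightly different from the configuration in the question:
module.exports = function (grunt) {
  grunt.initConfig({
    custom: {
      main: {
        // Files Array Format: Grunt expands these globbing patterns itself
        files: [
          { src: ["folder1/*.json", "folder2/**/*.json"] }
        ]
      }
    }
  });

  grunt.registerMultiTask("custom", "...", function () {
    // this.filesSrc is the flattened list of all matched source paths
    this.filesSrc.forEach(function (file) {
      grunt.log.writeln("Processing " + file);
    });
  });
};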
I'm pretty new to Node.js, and it seems fairly easy to use, but when it comes to getting a value from the command line and returning that value to be used in another package or .js file, it seems harder than I expected.
Long story short, I've used an npm package (akamai-ccu-purge) to successfully enter a file to purge on the Akamai network.
I want to make it more dynamic though by prompting the user to enter the file they want purged and then using that in the akamai package.
After making a few tries using var stdin = process.openStdin(); I actually found another npm package called Prompt that seemed to be easier. Both ways seem to have the same problem though.
Node doesn't seem to want to stop for the input. It seems to want to automatically make the purge without waiting for input even though I've called that module first. It actually gets to the point where I should enter the file but it doesn't wait.
I am definitely missing something in my understanding or usage here, what am I doing wrong?
My code so far is:
var purgeUrl = require('./getUrl2');
var PurgerFactory = require('../../node_modules/akamai-ccu-purge/index'); // this is the directory where the index.js of the node module was installed

// area where I placed the authentication tokens
var config = {
  clientToken: //my tokens and secrets akamai requires
};

// area where urls are placed. More than one can be listed with comma separated values
var objects = [
  purgeUrl // I am trying to pull this from the getUrl2 module
];

// Go for it!
var Purger = PurgerFactory.create(config);

Purger.purgeObjects(objects, function(err, res) {
  console.log('------------------------');
  console.log('Purge Result:', res.body);
  console.log('------------------------');

  Purger.checkPurgeStatus(res.body.progressUri, function(err, res) {
    console.log('Purge Status', res.body);
    console.log('------------------------');

    Purger.checkQueueLength(function(err, res) {
      console.log('Queue Length', res.body);
      console.log('------------------------');
    });
  });
});
The getUrl2 module looks like this:
var prompt = require('../../node_modules/prompt');

//
// Start the prompt
//
prompt.start();

//
// Get property from the user
//
prompt.get(['newUrl'], function (err, result) {
  //
  // Log the results.
  //
  console.log('Command-line input received:');
  console.log('  http://example.com/custom/' + result.newUrl);

  var purgeUrl = 'http://example.com/custom/' + result.newUrl;
  console.log(purgeUrl);

  module.exports = purgeUrl;
});
Thanks again for the help!
I would probably just allow getURL2 to expose a method that will be invoked in the main module. For example:
// getURL2
var prompt = require('../../node_modules/prompt');

module.exports = {
  start: function(callback) {
    prompt.start();

    prompt.get(['newUrl'], function (err, result) {
      // the callback is defined in your main module
      return callback('http://example.com/custom/' + result.newUrl);
    });
  }
};
Then in your main module:
require('./getUrl2').start(function(purgeURL) {
  // do stuff with the purgeURL defined in the other module
});
The implementation may differ, but conceptually, you need to make the work in your second module, which requires some sort of input from the user, happen as a result of that input. Callbacks are a common way to do this (as are Promises). However, since prompt doesn't necessarily expose a method that would necessitate a Promise, you can do it with plain old callbacks.
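For completeness, here is a sketch of the same idea wrapped in a Promise instead of a callback. The module path and URL prefix are taken from the question; the rest is purely illustrative:
// getUrl2.js -- Promise-based variant of the module above
var prompt = require('../../node_modules/prompt');

module.exports = {
  start: function () {
    return new Promise(function (resolve, reject) {
      prompt.start();

      prompt.get(['newUrl'], function (err, result) {
        if (err) return reject(err);
        resolve('http://example.com/custom/' + result.newUrl);
      });
    });
  }
};

// main module
require('./getUrl2').start().then(function (purgeUrl) {
  // hand the URL to the purger here, e.g. Purger.purgeObjects([purgeUrl], ...)
});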
You might also want to search around for articles on writing command line tools (sometimes referenced as CLIs) or command line apps with Node. I found the following article to be helpful when trying to figure this out myself:
http://javascriptplayground.com/blog/2015/03/node-command-line-tool/
Also, the command-line-args module worked well for me (though there's a number of other modules out there to choose from):
https://www.npmjs.com/package/command-line-args
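As a rough illustration of what command-line-args usage looks like (the option name below is made up for the example):
// purge-cli.js -- hypothetical option name, just to show the shape of the API
var commandLineArgs = require('command-line-args');

var optionDefinitions = [
  { name: 'url', type: String }
];

// e.g. node purge-cli.js --url http://example.com/custom/somepage
var options = commandLineArgs(optionDefinitions);
console.log(options.url);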
Good luck!
Inside a Yeoman generator I am trying to do a conditional copy depending on the state of an external network resource. My problem is that the Yeoman copy command (src.copy, and template too for that matter) does not seem to do anything when invoked inside an async callback, such as one from an HTTP request.
Example code, inside of the yeoman.generators.NamedBase.extend block:
main: function(){
  //-> here this.copy('inlocation','outlocation') works as expected
  var that = this;
  var appName = ...
  var url = ...

  var req = http.request(url, function(res){
    //-> here that.copy('inlocation','outlocation') DOES NOT work
    res.on('data', function (data) {
      //console.log('Response received, onData event');
      //-> here that.copy('inlocation','outlocation') DOES NOT work
    });
    //-> here that.copy('inlocation','outlocation') DOES NOT work
  });

  req.on('error',function(error){
    //...
  });

  req.end();
  //-> here this.copy('inlocation','outlocation') works as expected, once again
}
Note the locations marked by '//->' comments for points of reference: when it works, it works as expected. When it doesn't, there's no output on the console whatsoever (so that.copy seems to exist as a function; in fact I can assert that typeof that.copy === 'function'!), no error messages, just no file created (the usual file-created message is missing too, which is characteristic of the properly working command).
Using call or apply to pass an explicit this reference to the functions didn't change the behaviour, nor did binding this to the async functions.
What is the explanation for this behaviour, and how can I make copy calls in this async manner?
As per Eric MORAND's comment, I'll post the solution I found as a separate answer instead of an edit to the original post; hopefully it'll be easier to find:
I've found a solution, using the async() function of the Yeoman RunContext (see the API docs here). The following line at the beginning of the async code:
var done = this.async();
then a call to done() right before I wanted to run copy made it behave as originally expected.
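Applied to the snippet from the question, that might look roughly like this. A sketch only: it keeps the placeholders from the question and calls done() right before the copy, as described above.
main: function () {
  var that = this;
  var done = this.async(); // obtained at the beginning of the async code

  var req = http.request(url, function (res) {
    res.on('end', function () {
      done(); // hand control back to the run loop...
      that.copy('inlocation', 'outlocation'); // ...right before running copy
    });
    res.resume(); // consume the response so the 'end' event fires
  });

  req.on('error', function (error) {
    // error handling omitted in this sketch
    done();
  });

  req.end();
}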