Process files by custom Grunt task - javascript

I am new to grunt... I just tried to implement a custom task (using TypeScript) that shall iterate over a set of given files and do some processing. This is what I have so far...
function gruntFile(grunt: IGrunt): void {
    grunt.registerMultiTask("custom", "...", () => {
        this.files.forEach(function(next) {
            ...
        });
    });

    var config: grunt.config.IProjectConfig = {
        custom: {
            files: [
                "folder1/*.json",
                "folder2/**/*.json"
            ]
        }
    };
    grunt.initConfig(config);
}

(module).exports = gruntFile;
Currently I struggle with the configuration and with how I can access the files array in my custom task handler function. Grunt gives me the error that it cannot read the property forEach of undefined. I also tried a configuration that looks like this...
var config = {
    custom: {
        files: [
            { src: "folder1/*.json" },
            { src: "folder2/**/*.json" }
        ]
    }
};
Not sure about that, but I have seen that in some tutorials...
I have seen a couple of sample grunt-files already, but in each example the configuration looks a bit different, or files are used in conjunction with imported tasks and modules, so the samples do not show how the configured files are accessed. Any guidance that helps me to better understand how it works (and what I am doing wrong) is appreciated.
Update
I found out that I can query options via the config-property, but I am not sure if this is the right way to do it. In my task-handler I do this to query the list of configured files...
var files = grunt.config.get("custom.files");
...which returns the expected array (but I find it a bit odd to query options via a path expression). I realized that (by using a TypeScript arrow function) the scope of this is not the context of the current task; that is the reason why files was always undefined. Changing the call to registerMultiTask to...
grunt.registerMultiTask("custom", "...", function() { ... });
...fixed this problem. I use wildcard characters in the path-expression; I was hoping that Grunt can expand those expressions and give me a list of all matching paths. Does this functionality exist, or do I have to create that on my own?

I was able to iterate over the configured files (file pattern) by using the following code...
grunt.registerMultiTask("custom", "...", function() {
    grunt.file
        .expand(this.data)
        .forEach(function(file) {
            ...
        });
});
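For reference, a multi-task already expands the configured globs for you: inside the handler, this.filesSrc is the flattened, glob-expanded list of matching paths, so the manual grunt.file.expand() call can usually be dropped. Note also that a multi-task iterates over named targets, so the patterns belong inside a target. A minimal sketch (the task name, target name, and patterns are illustrative, not from the original project):

```javascript
// Minimal Gruntfile sketch: a multi-task reading its expanded file list.
module.exports = function (grunt) {
    grunt.initConfig({
        custom: {
            // "all" is an arbitrary target name; multi-tasks run once per target.
            all: {
                src: ["folder1/*.json", "folder2/**/*.json"]
            }
        }
    });

    grunt.registerMultiTask("custom", "...", function () {
        // this.filesSrc is already glob-expanded by Grunt.
        this.filesSrc.forEach(function (file) {
            grunt.log.writeln(file);
        });
    });
};
```

Running grunt custom would then print each JSON file matched by the two patterns.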

Related

Return array with fast-csv in Node

I am attempting to parse a large file using the fast-csv library and return its values as an array to a config.js file. Please help, as the value of countries in the config's module.exports section ends up being undefined.
Parser:
import csv from 'fast-csv';

export function getCountries() {
    let countries = [];
    csv.fromPath('./src/config/csv_configs/_country.csv')
        .on('data', function(data) {
            countries.push(data);
        })
        .on('end', function () {
            return countries;
        });
}
Config:
import {getCountries} from '../tools/csv_parser';

let countryList = [];

module.exports = {
    port: process.env.PORT || 8000,
    token: '',
    countries: getCountryList()
};

function getCountryList() {
    if (countryList.length === 0) {
        countryList = getCountries();
    }
    return countryList;
}
I understand this is due to me attempting to return a value from the anonymous function passed to .on('end'), however I do not know the proper approach.
You're correct that returning a value from the .on('end') callback is the source of your problem.
Streams are asynchronous. If you want to use this fast-csv library, you're going to need to return a promise from getCountries(). However, I'm assuming that's not what you want, since you're using the result in a config file, which is synchronous.
Either you need to read your csv synchronously, or you need to refactor the way your application works to be able to have your config be asynchronous. I'm assuming the second option isn't possible.
You probably want to look into using another CSV library that doesn't use streams, and is synchronous. Two examples from a quick Google search are:
https://www.npmjs.com/package/csv-load-sync
https://www.npmjs.com/package/csvsync
I haven't used either of these libraries personally, but it looks like they'd support what you're trying to do. I'm assuming your CSV file is small enough to all be stored in memory at once, if not, you're going to have to explore more complicated options.
As a side note, is there any specific reason that the data has to be in CSV format? It would seem to be much easier to store it in JSON format. JSON can be imported to your config file directly with require; no external libraries needed.

Write custom webpack resolver

I'm planning on using a slightly more sophisticated set of conventions to import assets in my webpack project. So I'm trying to write a plugin that rewrites parts of requested module locators and then passes them down the resolver waterfall.
Let's assume we just want to
check if a requested module starts with the # character and
if so, replace that with ./lib/. The new module locator should now be looked up by the default resolver.
This means when a file /var/www/source.js does require("#example"), it should then actually get /var/www/lib/example.js.
So far I've figured out I'm apparently supposed to use the module event hook for this purpose. That's also the way chosen by other answers which unfortunately did not help me too much.
So this is my take on the custom resolve plugin, it's pretty straightforward:
function MyResolver () {}

MyResolver.prototype.apply = function (compiler) {
    compiler.plugin('module', function (init, callback) {
        // Check if rewrite is necessary
        if (init.request.startsWith('#')) {
            // Create a new payload
            const modified = Object.assign({}, init, {
                request: './lib/' + init.request.slice(1)
            })
            // Continue the waterfall with modified payload
            callback(null, modified)
        } else {
            // Continue the waterfall with original payload
            callback(null, init)
        }
    })
}
However, using this (in resolve.plugins) doesn't work. Running webpack, I get the following error:
ERROR in .
Module build failed: Error: EISDIR: illegal operation on a directory, read
# ./source.js 1:0-30
Apparently, this is not the way to do things. But since I couldn't find much example material out there on the matter, I'm a little bit out of ideas.
To make this easier to reproduce, I've put this exact configuration into a GitHub repo. So if you're interested in helping, you may just fetch it:
git clone https://github.com/Loilo/webpack-custom-resolver.git
Then just run npm install and npm run webpack to see the error.
Update: Note that the plugin architecture changed significantly in webpack 4. The code below will no longer work on current webpack versions.
If you're interested in a webpack 4 compliant version, leave a comment and I'll add it to this answer.
I've found the solution, it was mainly triggered by reading the small doResolve() line in the docs.
The solution was a multiple-step process:
1. Running callback() is not sufficient to continue the waterfall.
To pass the resolving task back to webpack, I needed to replace
callback(null, modified)
with
this.doResolve(
    'resolve',
    modified,
    `Looking up ${modified.request}`,
    callback
)
(2. Fix the webpack documentation)
The docs were missing the third parameter (message) of the doResolve() method, resulting in an error when using the code as shown there. That's why I had given up on the doResolve() method when I found it before putting the question up on SO.
I've made a pull request, the docs should be fixed shortly.
3. Don't use Object.assign()
It seems that the original request object (named init in the question) must not be duplicated via Object.assign() before being passed on to the resolver.
Apparently it contains internal information that tricks the resolver into looking up the wrong paths.
So this line
const modified = Object.assign({}, init, {
    request: './lib/' + init.request.slice(1)
})
needs to be replaced by this:
const modified = {
    path: init.path,
    request: './lib/' + init.request.slice(1),
    query: init.query,
    directory: init.directory
}
That's it. To see it a bit clearer, here's the whole MyResolver plugin from above now working with the mentioned modifications:
function MyResolver () {}

MyResolver.prototype.apply = function (compiler) {
    compiler.plugin('module', function (init, callback) {
        // Check if rewrite is necessary
        if (init.request.startsWith('#')) {
            // Create a new payload
            const modified = {
                path: init.path,
                request: './lib/' + init.request.slice(1),
                query: init.query,
                directory: init.directory
            }
            // Continue the waterfall with modified payload
            this.doResolve(
                // "resolve" just re-runs the whole resolving of this module,
                // but this time with our modified request.
                'resolve',
                modified,
                `Looking up ${modified.request}`,
                callback
            )
        } else {
            // Continue the waterfall with the original payload
            this.doResolve(
                // Using "resolve" here would cause an infinite recursion,
                // use an array of the possibilities instead.
                [ 'module', 'file', 'directory' ],
                init,
                `Looking up ${init.request}`,
                callback
            )
        }
    })
}

NodeJS Group Functions Under A Sub-Class

perhaps I have not worded the title correctly. Below is the explanation of what I'm trying to do.
I'm creating a helper.js file for my project. Inside it contains many functions, one I've pasted below. I export this function using module.exports.
function generateInternalError(message, stack_trace) {
    if (process.env.NODE_ENV == 'dev') {
        console.log({ message: message, stack_trace: stack_trace });
    } else {
        console.log({ message: message });
    }
}

module.exports = {
    generateInternalError: generateInternalError
};
Where I want to utilize this function I would call:
helper.generateInternalError('Not Found',new Error().stack);
And it works as expected.
But, what I have been tasked with, is creating categories of functions. Essentially I need the following:
helper.errors.generateInternalError('Not Found',new Error().stack);
I cannot seem to figure out the right way to export a class of functions or an object of functions in NodeJS such that I don't get an error like:
TypeError: helper.errors.generateClientError is not a function
Any assistance is appreciated.
Thank you
The module.exports property of a file is simply an object that maps names to functions. You can define it arbitrarily, for example:
module.exports = {
    errors: {
        generateInternalError,
        ...
    },
    ...
};
Then, require('./helper').errors.generateInternalError will be defined.
helpers is just noise if everything is a helper, so drop that. Just use regular modules, and unless you are sure a category will only ever contain one function, export multiple functions from it. If you do export only one function with module.exports, you don't need to wrap it in an object, which also means you can simply write const genError = require('./errors').
Don't make something like helpers.errors.someErrorFunc, because helpers is noise and you create categories with separate module files anyway. Don't try to make Node.js look like Java or something equally horrible.
It might be better to structure your helper subclasses in separate files.
Example
src/helpers.js
src/helpers/
src/helpers/errors.js
File helpers.js
module.exports = {
    errors: require('./helpers/errors')
}
File helpers/errors.js
module.exports = {
    generateInternalError: function() {
        // write some internal error code here
    }
};
Structuring your code like this will keep your root helpers file very organized and create a pattern that is easy to replicate for new subclasses.
If you prefer a less modular approach, you could simply return one big object, as others have demonstrated...
module.exports = {
    errors: {
        generateInternalError: function() {
            // internal error code
        },
        generateDatabaseError: function() {
            // db error code
        }
    }
}
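As a minimal, self-contained sketch of the nested-object approach (the function body is adapted from the question; the return value is added here only to make the behaviour easy to check):

```javascript
// A category object grouping related helper functions under one namespace.
const errors = {
    generateInternalError(message, stackTrace) {
        // In dev, include the stack trace; otherwise log the message only.
        const payload = process.env.NODE_ENV === 'dev'
            ? { message: message, stack_trace: stackTrace }
            : { message: message };
        console.log(payload);
        return payload;
    }
};

// The helper module then exposes each category as a property.
const helper = { errors };
```

Callers can then write helper.errors.generateInternalError('Not Found', new Error().stack), exactly as the question asks.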

GruntJS Configurable First Time Run

I am working on an Angular demo application and I want to automate a lot of things.
It's some sort of boilerplate, albeit a more complex one. I want to create a config file that holds API keys and other settings, and I want that file to be populated by Grunt, with user interaction, the first time the project is started.
Something like:
grunt build - it should ask the user directly in the console for the API keys, which will then be inserted into the config file where I define some global constants for the entire app.
Is there such an example of functionality with Grunt ?
You can handle the questioning by using:
https://github.com/dylang/grunt-prompt
It is a nice little plugin that does one job and does it well. It puts whatever values you enter on the command line into variables: (example)
prompt: {
    target: {
        options: {
            questions: [
                {
                    config: 'key',   // arbitrary name, readable from any other grunt task
                    type: 'input',   // list, checkbox, confirm, input, password
                    message: 'What is your API key?',
                    default: '',     // default value if nothing is entered
                    // only ask this question when config.yml does not exist yet
                    when: function(answers) { return !grunt.file.exists('config.yml'); }
                }
            ]
        }
    }
}
Then you can use the Grunt.file functions to write those values into files:
http://gruntjs.com/api/grunt.file#grunt.file.write
To orchestrate it, you will need to create a custom task: (example)
grunt.registerTask("my_config_task", function (arg) {
    var key = arg || grunt.config('key');
    grunt.file.write("config.yml", key);
});

grunt.registerTask('build', ['prompt', 'my_config_task']);
The writing will likely need refinement, as you will probably need to replace existing values and organise the output as a YAML file or JSON object, etc...
I found one of the possible solutions while looking at the source of grunt-bumpup. What they do is parse the config file as a JSON object:
https://github.com/darsain/grunt-bumpup/blob/master/tasks/bumpup.js#L128
They replace whatever values they need (as JSON) and overwrite the file with the object stringified:
https://github.com/darsain/grunt-bumpup/blob/master/tasks/bumpup.js#153
Seems to work well.

RequireJS dependency override for configurable dependency injection

I'm working with something that seems a perfect fit for DI, but it's being added to an existing framework that did not have that in mind when it was written. The config that defines dependencies comes from a back-end model. Really, it's not a full config at this point; it basically contains a key that can be used to determine whether a particular view should be available.
I'm using require so the dependency looks something like this
// Dependency
define(['./otherdependencies'], function(Others) {
    return {
        dep: "I'm a dependency"
    };
});
And right now the injector looks something like this
// view/injector
define([
    './abackendmodel',
    './dependency'
], function(Model, Dependency) {
    return {
        show: function() {
            if (model.showDependency) {
                var dep = new Dependency();
                this.$el.append(dep);
            }
        }
    };
});
This is a far stretch from the actual code, but the important part is how require works. Notice that in the injector code the dependency is required and used in the show method, but only if the model says it should be shown. The problem is that there may be additional things required by the dependency that aren't available when it shouldn't be shown. So what I'd really like is to not have to specify that dependency unless model.showDependency is true. I've come up with a couple of ideas, but nothing that I like.
Idea one
Have another async require call based on that model attribute. So the injector would look like this.
// Idea one: view/injector
define([
    './abackendmodel'
], function(Model) {
    var Dep1 = null;
    if (model.showDependency) {
        require([
            './dependency'
        ], function(Dependency) {
            Dep1 = Dependency;
        });
    }
    return {
        show: function() {
            if (Dep1) {
                var dep = new Dep1();
                this.$el.append(dep);
            }
        }
    };
});
Obviously this has issues. If show is called before the async require call has finished, Dep1 will still be null, so we don't show the dependency (which defeats the goal), and JS errors will be thrown. We're also still using an if check in show, which I don't like; but the use case is that a dependency may or may not be present, and we just don't want to require it if it's not needed, so I might not be able to get around that. Also keep in mind that model.showDependency is not actually a boolean value; it can have multiple values, which would call for different dependencies to be required. I'm just stripping it down here for simplicity.
Idea two
This is less solidified, i.e. I don't think it will even work, but I've considered playing with the require config.paths settings. My idea was basically to have two configs so that './dependency' pointed to different places. The problem is that whatever value model.showDependency has, the require config is the same, so it can't change at run-time. Maybe some magic could be done here, like having separate view directory paths defined and using a factory-type object to return the one we care about, but since that would ultimately result in the same async behaviour as idea one, I don't think it buys me anything.
Idea three
Have the dependency itself return null, based on the model.showDependency attribute.
This might be the best solution right now. I'm still stuck with some ifs, but I don't think that's going away. Also, this prevents initialization code from being called.
Any better ideas?
Why not try using a promise, for loading the dependency?
You have two choices, depending on how your code needs to work.
Option 1)
Return a promise for the result of the module 'view/injector'; the result of this promise would be the object shown above.
Option 2)
Use a promise to load the dependency, and then execute the logic once the promise has been resolved.
Below is an example of Option 2, using a jQuery-style deferred. I typically prefer when.js or Q. This example might fall apart if the order of appending is important.
// Option 2
define([
    './abackendmodel'
], function(Model) {
    var dep1Promise = null;
    if (model.showDependency) {
        var dep1Deferred = $.Deferred();
        dep1Promise = dep1Deferred.promise();
        require([
            './dependency'
        ], function(Dependency) {
            dep1Deferred.resolve(Dependency);
        }, dep1Deferred.reject); // Optionally reject the promise if there is an error loading the dependency.
    }
    return {
        show: function() {
            var self = this; // keep a reference so the view is still reachable inside the callback
            if (dep1Promise) {
                dep1Promise.then(function(Dep1) {
                    var dep = new Dep1();
                    self.$el.append(dep);
                });
            }
        }
    };
});
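The same lazy-loading idea can be sketched with plain promises, independent of RequireJS and jQuery. makeInjector and the loader callback below are illustrative names, not part of the original code:

```javascript
// Lazily load a dependency only when the condition holds. Callers always go
// through the promise, so there is no "not loaded yet" race as in idea one.
function makeInjector(shouldLoad, loader) {
    // Kick off loading immediately if configured; otherwise never load.
    const depPromise = shouldLoad ? loader() : null;
    return {
        show() {
            if (!depPromise) {
                // Dependency not configured: resolve with nothing to show.
                return Promise.resolve(null);
            }
            // Instantiate only once the dependency has arrived.
            return depPromise.then(Dep => new Dep());
        }
    };
}
```

With RequireJS, the loader argument would wrap the async require call in a promise, e.g. () => new Promise(resolve => require(['./dependency'], resolve)).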
