For the life of me I can't get the clean task to skip certain files.
clean: {
    empty_dist: ['./dist/**/*', '!./dist/home/config.js']
}
When I run this, my dist/ dir gets completely emptied, including dist/home/config.js which I wanted to skip.
I am running it with grunt clean:empty_dist using version ^0.5.0
Also, as an aside, what is the difference between '**/*' and '*'?
Looks like dist/home is (itself) being deleted. Perhaps you should change your initial match to ./dist/**/*.js, or also negate the parent directory with !./dist/home. (As for the aside: * matches only within a single path segment, while **/* matches recursively through subdirectories.)
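For example, a sketch of the adjusted config with the directory negated as well (same target name as above):
clean: {
    empty_dist: [
        './dist/**/*',
        '!./dist/home',           // keep the directory itself from being deleted
        '!./dist/home/config.js'  // keep the file
    ]
}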
I have a fairly simple webpack set up with a bit of a twist. I have a few different ways I can think of to create my intended behavior, but I'm wondering if there are better approaches (I'm still fairly new to webpack).
I have a basic TypeScript/Scss application and all of the src files exist in a src directory. I also have a component library bundle that's dynamically generated through a separate build process (triggered via Node). This bundle also ends up in the src directory (it contains some sass variables and a few other assets that belong in src). This is src/custom-library-bundle. My current webpack set up moves the appropriate bundle files to a public (dist) directory via the CopyWebpackPlugin. My webpack dev server watches for changes and reloads as expected. This all works beautifully.
Now for the twist. I've set up a custom watcher that exists elsewhere to watch for the custom component library bundle changes. When a change occurs there, it triggers that custom build process mentioned above. This (as expected) modifies the src/custom-library-bundle directory and sets off several watches/reloads as the bundle is populated. Technically it works? And the behavior is expected, but ideally, I could tell it to wait until the custom installation work is "done" and only then have it trigger the compilation/reload.
Phew. This isn't an easy one to explain. Here's a webpack config that will (hopefully) help clarify:
devServer: {
    contentBase: path.join(__dirname, 'public'),
    port: 9000,
    host: '127.0.0.1',
    after: (app, server) => {
        new MyCustomWatcherForOtherThings().watch(() => {
            // invoked after the src/custom-library-bundle is done doing its thing (each time)
            // now that I know it's done, I want to trigger the normal compilation/reload
        })
    },
    watchOptions: {
        ignored: [
            /node_modules/
            // I've been experimenting with the ignored feature a bit
            // path.resolve(__dirname, 'src/custom-library-bundle/')
        ]
    }
}
Ideal Approach: In my most ideal scenario, I want to just manually trigger webpack dev server to do its thing in my custom watch callback; have it ignore the src/custom-library-bundle until I tell it to pay attention. I can't seem to find a way to do this, however. Is this possible?
Alternate Approach #1: I could ignore the src/custom-library-bundle directory, move the updated files to public (not utilizing the webpack approach), and then trigger just a reload when I know that's complete. I don't love this because I want to utilize the same process whether I'm watching or just doing a one-off build (I want everything to end up in the public directory because webpack did that work, not because I wrote a script to put it there under specific scenarios). But, let's say I get over that, how can I trigger a browser reload for the dev server? Is this possible?
Alternate Approach #2: This is the one I'm leaning towards, but it feels like extra, unnecessary work. I could have my custom build process output to a different directory (one that my webpack setup doesn't care about at all). Once the build process is complete, I could move all the files to src/custom-library-bundle, where the watch would pick up one change and do a single compilation/reload. This gets me so close, but it feels like I'm adding a step I don't want to.
Alternate Approach #3? Can you think of a better way to solve this?
Update (including versions):
webpack 4.26.1
webpack-dev-server 3.1.14
Add the following server.sockWrite call to your after method:
devServer: {
    after: (app, server) => {
        new MyCustomWatcherForOtherThings().watch(() => {
            // invoked after the src/custom-library-bundle is done doing its thing (each time)
            // now that I know it's done, I want to trigger the normal compilation/reload
            // calling this on `server` triggers a full page refresh
            server.sockWrite(server.sockets, "content-changed");
        });
    }
}
I've never found this in the documentation, but one of webpack's core devs mentioned it in a comment on GitHub, so it's sort of sanctioned.
Useful things that webpack provides that come to mind are multi-compiler builds, creating a child compiler from a Compiler or Compilation instance, the DllPlugin, and programmatically managing the compiler by calling compiler.watch() or compiler.run().
the multi-compiler setup is useful when you want to build several compilations in a single run whose outputs are independent of each other
child compilers allow you to set up dependencies between compilers; using hooks, they let you block the parent until, say, the child compilation has finished emitting the latest bundle into the assets
the DllPlugin allows you to create a compilation and its manifest that can produce chunks containing modules usable as dependencies by not-yet-built compilations (the manifest needs to be produced and passed to them beforehand)
programmatically managing your compiler lets you write a straightforward Node.js script that could accomplish most of this manually
If I understood correctly, your webpack compiler doesn't really have any dependencies on your bundle aside from expecting it to have something added to the output assets, so you might consider doing a multi-compiler build. If that doesn't work for you, you can write a simple plugin that creates a child compiler that watches all the component bundle dependencies and emits the built bundle into the main compilation assets on changes. Ultimately, as you mentioned yourself, you could write a simple script and programmatically weave everything together. All of these approaches offload tracking build dependencies to webpack, so if you put the compilers into watch mode webpack will keep track when something needs updating.
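For the programmatic route, here is a minimal sketch using webpack's Node API (the config path and ignored pattern are assumptions, not taken from your setup):
// a sketch, not a drop-in solution: drive webpack from a Node script
const webpack = require('webpack');
const config = require('./webpack.config.js'); // assumed location

// pass an array of configs here instead for a multi-compiler build
const compiler = webpack(config);

// put the compiler into watch mode; the handler runs after every rebuild
const watching = compiler.watch(
    { ignored: /node_modules/ }, // same semantics as devServer watchOptions.ignored
    (err, stats) => {
        if (err) return console.error(err);
        console.log(stats.toString({ colors: true }));
    }
);

// from, say, your custom watcher callback you can then force a rebuild
// or stop watching altogether:
// watching.invalidate();
// watching.close(() => console.log('watching closed'));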
If you're interested in taking a closer look at how child compilers are created and used, I would heartily recommend reading through the sources of html-webpack-plugin. It doesn't seem like that plugin handles the same kind of build setup as yours, but it's worth noting that it works with files that aren't part of the build dependencies (nothing in the sources references or depends on the files/templates that are used for creating the HTML files added to the output).
I want to do something like:
var dynamicRequire = require.context('./', true);
console.log(dynamicRequire.keys());
dynamicRequire('react/foo/bar');
But the console.log only shows files from the local directory, not the npm packages. When webpack builds, I can see the module get included as number 234, but the mapping from that path to that number is lost. How can I accomplish this? Thanks!
I'm not sure what your problem is exactly, but I suspect that for your purpose you have simply forgotten to provide require.context with its second argument, a flag that decides whether webpack should also look into subfolders and pick up your files there. So you could use require.context('./', true, [some regexp maybe?]).
Let me know if that doesn't address your problem.
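For reference, a small sketch of the three-argument form (the regexp and the path passed at the end are illustrative; this only works in code processed by webpack):
// recursively index every .js file under the context directory
var dynamicRequire = require.context('./', true, /\.js$/);

// keys are relative to the context directory, e.g. './foo/bar.js'
console.log(dynamicRequire.keys());

// load one of the indexed modules (hypothetical path)
var mod = dynamicRequire('./foo/bar.js');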
I'm really new to Grunt.js, and have had some luck being able to run some of the tasks I've installed (e.g. watch, uglify, jslint). As I try to run more, I often run into issues and try to google/research as much as I can in order to learn from the ground up how Grunt works.
However, I get confused by different configurations like these two for uglify:
From the GitHub Repo for grunt-contrib-uglify
uglify: {
    my_target: {
        files: {
            'dest/output.min.js': ['src/input1.js', 'src/input2.js']
        }
    }
}
and this one (which works for me in my Gruntfile.js):
uglify: {
    build: {
        src: 'js/custom-script.js',
        dest: 'js/custom-script.min.js'
    }
},
It's not really these two in particular, but I notice that each uses its own words (my_target versus build, src, dest), structure, syntax, etc. I thought that, since Grunt is all JavaScript, these would all be in JSON format, though I wasn't able to verify whether they were or not.
After doing lots of research through the Grunt documentation, going through the GitHub repositories containing the plugins, and random various tutorials, I guess I have some main questions:
Is there a standardized way to write a Gruntfile.js?
Are there any reserved words for a Gruntfile.js? I tried changing the word dest in my uglify task to gibberish, and it did fail, so my gut says yes.
If yes to either of the above two questions, where are these resources/links? I tried to google "grunt glossary" but came up empty. The only standard seems to be the one Grunt itself supplies, but I'm having a hard time getting things to work by referencing only it.
There are multiple things at work here, and not all config keys are created equal. The reference doc is http://gruntjs.com/configuring-tasks but here's a summary:
most grunt tasks are what are called "multi-target" tasks, meaning that in your build process you might call the task multiple times with different parameters. In the config, the first level is the name of the target, and it is completely free (except for options, see below). In your examples, these are the build and my_target names.
besides these targets, you may have an options field (a reserved keyword) that is passed to all targets
in the targets themselves, grunt provides some reserved keywords, for options (options) and to define files (src, dest, files, ... see http://gruntjs.com/configuring-tasks#files)
and the task author is free to define their own keys, so the doc of each task is very important.
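Putting those pieces together, here is a sketch of a multi-target config (the target names and paths are just examples, echoing the two snippets from the question):
module.exports = function (grunt) {
    grunt.initConfig({
        uglify: {
            options: { mangle: true },          // task-level options, passed to all targets
            build: {                            // a target; the name is free
                options: { mangle: false },     // target-level options override task-level ones
                src: 'js/custom-script.js',     // reserved keyword
                dest: 'js/custom-script.min.js' // reserved keyword
            },
            my_target: {                        // another target, using the files form
                files: {
                    'dest/output.min.js': ['src/input1.js', 'src/input2.js']
                }
            }
        }
    });
    grunt.loadNpmTasks('grunt-contrib-uglify');
};
Running grunt uglify executes every target in turn, while grunt uglify:build runs just that one.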
I have several JavaScript files included as external scripts in my HTML documents, and I'd like to combine them so things are less crowded. Is there any way to combine my JS files? For example:
my files:
a.js
b.js
c.js
d.js
and I want:
all.js
Take a look at requirejs.org and especially look at r.js (http://requirejs.org/docs/optimization.html)
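For instance, a minimal r.js build profile might look like this (file names are assumptions); you'd run it with node r.js -o build.js:
// build.js: a hypothetical r.js build profile
({
    baseUrl: 'js',  // where the source modules live
    name: 'main',   // the entry module
    out: 'all.js'   // the combined, optimized output
})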
a.js
var i=0;
function fun1()
{
...
}
b.js
var k=0;
function fun2()
{
...
}
all.js (just copy and paste, like CSS):
var i=0;
function fun1()
{
...
}
var k=0;
function fun2()
{
...
}
Take care with semicolons and closing braces when you write a whole script inside an event listener, especially 'DOMContentLoaded':
document.addEventListener('DOMContentLoaded', function ()
{
    // whole big script
}
);
instead use
var some_function = function () { /* bla bla bla */ };
document.addEventListener('DOMContentLoaded', some_function);
(note that some_function must be assigned before the listener is registered; only the var declaration is hoisted, not the function value)
A simple bash cat operation will do what you want, but, at some stage you're probably going to want more right?
Grunt and grunt-contrib-concat is a good starter, but you'll quickly realise grunt is not particularly good. To summarise usage: you create a gruntfile, install a few dependencies (i.e. install grunt and its command line interface, this is easy) and run grunt from your project root. It then parses the gruntfile to find out what you want it to do, and it does it. Pretty simple, and simple is good; a minimal gruntfile for this is sketched below.
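A sketch, assuming grunt-contrib-concat is installed and using the file names from the question:
module.exports = function (grunt) {
    grunt.initConfig({
        concat: {
            dist: {
                src: ['a.js', 'b.js', 'c.js', 'd.js'], // concatenated in this order
                dest: 'all.js'
            }
        }
    });
    grunt.loadNpmTasks('grunt-contrib-concat');
    grunt.registerTask('default', ['concat']); // so plain `grunt` runs it
};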
Next up is Gulp, which is a nice build system using streams, so, slightly more complex (well, easier and more powerful but, streams can be kind-of confusing at first). Gulp works in the same way only it parses a gulpfile for instructions. For a concat operation the actual gulp command is trivial:
var concat = require('gulp-concat');  // concatenation is done via a plugin
gulp.src( '*.js' )
    .pipe( concat( 'all.js' ) )
    .pipe( gulp.dest( 'dist' ) );     // gulp.dest takes a directory, not a file name
Between the .src and the .dest you can pipe the files through multiple transforms, such as minifying, transpiling, notifying—the list of plugins and modules is dizzying (as it is for grunt).
However, if you’re a fan of node and npm (you probably should be) then you can use npm scripts to create a build system. npm is the node package manager and requires a package.json to give some clues as to how to work. Part of that json specification is a scripts block
"scripts": {
"build" : "cat *.js > all.js"
}
You can then use npm run build from the command line, whereby npm will parse the package.json and execute the script using bash (sh actually).
Note that these are build systems, and there are many others.
There are also other packagers (which you would probably use as part of your build system, although for some projects they are your entire build system), but they are more complex than your needs; for your own research, browserify, webpack and jspm are all excellent (bear in mind AMD modules lost out, so require.js is probably not worth your time), although this area is becoming congested. Each of these is a very powerful modularisation tool, but they will require some changes to how you structure your code. If you are serious about modularisation then they are worth your time learning.
On a slightly different tangent, there is some discussion about whether one large file is actually more beneficial than a number of smaller scripts. In many cases simply serving a few small files is actually quicker, and may be easier, although there can be other benefits of smashing code together. Currently it is probably still best to concat at least into less HTTP requests, but this requirement for performance is going away.
This might be helpful: https://github.com/mrclay/minify
OR
create a file all.js and paste in the code from a.js, b.js, c.js and d.js
Can a package require itself and its subsystems?
For instance there is module:
src/deep/path/to/module.js
which need to require
src/another/module.js
Instead of:
require('./../../../another/module.js');
Can one just:
require('<self>/another/module.js');
?
For instance, this might be useful in testing: a test unit could reference its test object without a long up-and-down-style path.
I have two considerations (but they do not satisfy this issue completely):
If the package is already in a node_modules folder, it can reference itself by its
canonical name (the one in its package.json).
The package can create a symlink to itself in its own node_modules folder (sic!). I haven't tried it yet; it may well lead to an infinite loop in some resolution cases.
Solution 1
Split it into different sub-projects and put each one into a different folder. As an example:
sub.project.1/
sub.project.2/
# in sub.project.1/
npm link
# in sub.project.2/
npm link sub.project.1
Then in sub.project.2 you can do it simply:
var something = require("sub.project.1")
This can remove the '../../...' relative path.
Note: this can also be done within a single folder/project; the sub folders can then easily refer to the project itself. For example, replace both sub.project.1 and sub.project.2 with my.project. And of course, all the names should be the names in the package.json files initialized by npm.
Solution 2
Create a link in the folder node_modules/
cd node_modules
ln -s .. myProjectRootDir
# '..' means the parent directory; lines starting with '#' are comments
Then it can be used anywhere in the project's directory tree:
var something = require("myProjectRootDir/path/to/js/file")
Thus the "../../..." can change to path read more easily.
And if myProjectRootDir happens to be the project name and
package name in package.json, it's ok.
Solution 3
There are the npm packages require-self and install-self; they do similar things.
Solution 4
Write a new .js file somewhere it's easy to require, then put all the annoying relative references into it.
For example, write it at node_modules/mymod.js
// mymod.js
module.exports.mod_one = require("../path/to/mod/one.js");
//...
Then require('mymod') gives you all the other modules.
This is not the best solution: all the require references need to be maintained by hand. But it's a single file, so it's manageable, and it centralizes the references for future deployments.
One of the cons of this solution: if you put the file in the node_modules/ folder and that folder is ignored in your git repository, you need to take care when pushing or branching the repo. When the node_modules folder is deleted, the file can also be deleted by accident.
There could be more solutions I don't know.
Well you can shorten it a little bit. You don't need the leading ./ in this case
require("../../../another/module.js");
And a little further by removing the trailing .js
require("../../../another/module");
Another answer suggests using process.cwd(), but be very careful with this: your require calls will only work if the app is launched from the same directory.
From the sounds of it, though, four directories is already pretty deep. You might want to consider fragmenting your large project into smaller, single-purpose modules. We would need more information on the project to know whether that is the right decision, though.
I often use process.cwd() to make things like this a little more manageable. This returns where the node application is actually running from and lets you create the path in a little cleaner fashion.
Something maybe like var x = require(process.cwd() + '/lib/module')
Without seeing exactly what you're trying to do, I'm not sure if this will be helpful, but you can do things like var connect = require('express/connect') as well. Basically you can take an installed local module and then build paths off of it.