I'm building a Node module with devDependencies that should be globally installed, such as jasmine-node and jshint. What I essentially need is to be able to reference their binaries in my makefile / npm scripts section to run tests, lint, etc. In other words I do not wish to require() them programmatically.
After digging around I'm still confused on how to handle this:
1) My first approach was to assume that these modules would be globally installed, clarify this in my module's documentation and reference their binaries as globals - i.e. expect them to be globally available. This conflicts with this piece of advice
Make sure you avoid referencing globally installed binaries. Instead, point it to the local node_modules, which installs the binaries in a hidden .bin directory. Make sure the module (in this case "mocha") is in your package.json under devDependencies, so that the binary is placed there when you run npm install.
(taken from this post)
This generally sounds right, as the aforementioned setup is rather fragile.
2) My next approach was explicitly including those modules in devDependencies (although they are still globally installed on my system (and most probably on users' & contributors' systems as well)). This ensures that appropriate versions of the binaries are there when needed and I can now reference them through node_modules/.bin/.
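For illustration, the setup in 2) might look like this in package.json (a sketch; the version ranges are placeholders, not recommendations):

```json
{
  "devDependencies": {
    "jasmine-node": "~1.0.0",
    "jshint": "~2.0.0"
  },
  "scripts": {
    "test": "node_modules/.bin/jasmine-node spec",
    "lint": "node_modules/.bin/jshint lib"
  }
}
```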
However I'm now in conflict with this piece of advice
Install it locally if you're going to require() it.
(taken from npm docs)
Regardless of that, I do notice that npm install will now actually fetch nothing (display no network activity) for the globally installed modules.
My questions:
Are the local versions of globally installed modules (that are mentioned in devDependencies) just snapshots (copies) of the global ones, taken during npm install?
Is 2) the correct way to go about doing this? or is there some other practice I'm missing?
Here's my personal take on this, which is decidedly divergent from node.js common practice, but I believe it is an overall superior approach. It is detailed in my own blog post (disclaimer about self-promotion, yada yada) Managing Per-Project Interpreters and the PATH.
It basically boils down to:
Never use npm -g. Never install global modules.
Instead, adjust your PATH to include projectDir/node_modules/.bin
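A minimal sketch of that PATH adjustment (assuming the current directory is the project root):

```shell
# Prepend this project's node_modules/.bin to PATH so locally installed
# binaries (jshint, jasmine-node, ...) win over any global ones.
export PATH="$PWD/node_modules/.bin:$PATH"
```

Put the equivalent in a per-project activation script or shell hook, and plain `jshint` on the command line resolves to the project's own copy.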
Revisiting my own question a couple of years after it was originally written, I feel I can now safely say that the quoted 'advice'
Install it locally if you're going to require() it.
does not stand anymore. (It was part of the npm docs but the posted 2-year old link gives me a 404 at the time of this writing.)
Nowadays, npm run is a fine way to do task management / automation and it'll automatically export modules which are installed locally, into the path before executing. Thus, it makes perfect sense to locally install modules that are not to be require()d such as linters and test-runners. (By the way, this is completely in line with the answer that Peter Lyons provided a couple of years ago - it may have been 'decidedly divergent from node.js common practice' back then, but it's pretty much widely accepted today :))
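For example (a sketch; the script names are arbitrary), with the tools in devDependencies you can write:

```json
{
  "scripts": {
    "lint": "jshint lib",
    "test": "jasmine-node spec"
  }
}
```

`npm test` and `npm run lint` then resolve `jasmine-node` and `jshint` from the local node_modules/.bin automatically, because npm run prepends that directory to the PATH of the spawned shell.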
As for my second question
Are the local versions of globally installed modules (that are mentioned in devDependencies) just snapshots (copies) of the global ones, taken during npm install?
I am pretty confident that the answer is No. (Perhaps the lack of network activity that I observed back then, during the installation of local modules which were also globally installed, was due to caching?)
Note, Nov 12 2016
The relevant npm docs to which the original question linked have moved here.
Related
Conflicts of a global and a local installation
I am working on the npm commandline tool and package https://github.com/ecma-make/ecmake. I ran into a strange conflict between a globally and a locally installed version of the package.
I can avoid this conflict by linking the one against the library of the other. Then there is only one instance of the library and no conflict. Now I have to think of the user, who installs the package in both places: once globally, to be able to run the command without the npx prefix, and once locally, to have the library listed in the dev section of package.json.
How to reproduce
# prepare test fixture
mkdir ecmakeTest
cd ecmakeTest/
npm init -y
# install globally
npm install -g @ecmake/ecmake@0.3.1
npm ls -g @ecmake/ecmake
# install locally
npm install --save-dev @ecmake/ecmake@0.3.1
npm ls @ecmake/ecmake
# init ecmakeCode.js
npx ecmake --init
# run with local lib => shows the expected behaviour
npx ecmake all
# run with global lib => NoRootTaskError
ecmake all
Origin of the conflict
The stack trace guides us to the line within the global installation: /usr/local/lib/node_modules/@ecmake/ecmake/lib/runner/reader.js:21:13.
if (!(root instanceof Task)) {
throw new Reader.NoRootTaskError(this.makefile);
}
What did happen?
The object root created with the local library was checked against the class definition of the global library. The two libraries have the same code, but they are distinct copies of it, so the instanceof check fails.
The global ecmake runner requires the local makefile ecmakeCode.js. This file in turn requires the Task definition of the local library.
const root = module.exports = require('@ecmake/ecmake').makeRoot();
root.default
.described('defaults to all')
.awaits(root.all);
[...]
We can verify that actually both libraries have been called by putting a logging instruction into both.
How do others solve this?
Gulp and Grunt export a function that takes the actual dependency by injection. While dependency injection is generally very smart, in this case it's not that pretty: the whole file gets wrapped. I would like to avoid this wrapping function.
See: https://gulpjs.com/docs/en/getting-started/quick-start#create-a-gulpfile
See: https://gruntjs.com/getting-started
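Applied to ecmake, the pattern those tools use would look roughly like this (a hypothetical sketch, not part of the actual ecmake API): the makefile exports a function, and the runner injects its own library instance when calling it.

```javascript
// Hypothetical sketch of the Gulp/Grunt wrapper pattern applied to ecmake:
// the makefile exports a function instead of require()ing the library, and
// the runner calls it with its own library instance, so the model and the
// runner always share a single copy of the Task classes.
function makefile(ecmake) {
  const root = ecmake.makeRoot();
  root.default
    .described('defaults to all')
    .awaits(root.all);
  return root;
}

module.exports = makefile;
```

This is exactly the wrapping I would like to avoid, but it shows why the pattern dissolves the instanceof conflict: only the runner's own library is ever instantiated.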
What I already considered
The runner could first check whether there is such a conflict. If so, it could delegate the arguments given to the global ecmake to the local npx ecmake by running a child process.
Alas, this would slow down the runner. At least one subprocess would be required, maybe more to detect the situation.
The question
Do you have a general solution to address this challenge (apart from those I already named with their disadvantages)?
As the first part of my own answer, I investigate why a canonical solution is unlikely.
No canonical solution
The makefile creates a data model. The runner loads and inspects this data model and runs the instructions stored inside it. Both the runner and the data model need the library. There is a global npm installation and a local one. This already gives four possible combinations.
The ecmake runner provides a --base directory option modelled after the original make tool. The meaning is, "change into the named directory before doing anything". This gives a second place for a local library. We are at three times three combinations.
I will name several solution strategies. Combining all of this, there are dozens of more or less reasonable options. This makes a canonical answer unlikely. Nonetheless, the fundamental challenges of this example apply generally to many command-line tools.
Exact answers
Answers that precisely address the original question are those strategies that decouple the model from the runner, or that take care that only one instance of the library is loaded.
Additional constraints
Real life doesn't answer questions that exactly. In the case of ecmake, the search for solutions brought up additional constraints. I want to make sure that the versions of the model and the runner match the major version the makefile was coded against. Version conflicts at major version changes should be avoided up front.
Hence, the makefile should be bundled with the appropriate version in package.json. If there is only a global installation, it should try to run, but a warning about the missing local installation should be given.
I have a large React app in production and I'm wondering if it's best to use fixed versions for my packages. I've heard that using the caret (^) is good practice, but it seems to me that it would leave the application open to more bugs?
I've googled this issue quite a bit, and there seems to be a split between ^ and fixed versions. Is there a definitive answer somewhere in the (npm) docs on what approach to use?
During development you can choose whichever you're comfortable with, but I would recommend shrinkwrapping just before you begin testing the app, before going into production. Lock down the dependencies with:
npm shrinkwrap
This command repurposes package-lock.json into a publishable npm-shrinkwrap.json or simply creates a new one. The file created and updated by this command will then take precedence over any other existing or future package-lock.json files. For a detailed explanation of the design and purpose of package locks in npm, see npm-package-locks.
That way you can leave the dependencies declared in package.json as they are (tilde/caret), but the exact versions declared in npm-shrinkwrap.json will only ever be used when npm installing.
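For reference, the loose declarations in package.json might look like this (package names and versions are just examples), while npm-shrinkwrap.json pins the exact resolved versions:

```json
{
  "dependencies": {
    "react": "^16.8.0",
    "lodash": "~4.17.0"
  }
}
```

Here `^16.8.0` accepts any 16.x.y at or above 16.8.0, `~4.17.0` accepts only 4.17.x patch releases, and a bare `16.8.0` would pin that exact version.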
I've personally had a problem just before going into production, when a dependency declared with ~ (the stricter one) was updated and introduced a bug (which shouldn't happen for a patch/bug fix). It's only ever happened once, but I wouldn't want to tempt fate.
You can always update your npm-shrinkwrap.json by first doing npm update <package_name> specifying the package that needs updating, then re-doing npm shrinkwrap to update the existing npm-shrinkwrap.json.
...and don't forget npm ci
I've been reading up on the npm left-pad fiasco, but I'm somewhat confused by how it happened. I think I have a misunderstanding of how npm actually works. If the developer of left-pad unpublished the package, I assume npm install left-pad wouldn't work anymore. However, for users who had already installed it, won't left-pad still be in the node_modules folder? Wouldn't the developers of say, Babel, have to remove and reinstall left-pad for npm to realize that the package has disappeared? I am clearly missing something, but I'm not sure what.
When I run npm install babel, left-pad is not bundled in Babel but rather is expressed as a dependency in its package.json file. So npm then has to go find left-pad and download it as well. So if you were installing left-pad, or anything using left-pad, for the first time, you wouldn't be able to. While this means you're safe if it already exists in your local directory, the project would fail to build properly as soon as it is built somewhere else. For example, a CI server that does a clean build from scratch for each new changeset would fail to build any project that relies on left-pad. Or if you were checking out a project for the first time, or deploying it to a new server, you wouldn't be able to build.
This is simple to fix if you were relying on left-pad directly. Just write a replacement and update your code to use the replacement. But when it's required deep in your dependency tree, say by Babel, it's unlikely you can refactor Babel or other modules on your own to use a left-pad replacement. You'd have to wait for all of the various node module developers to update their modules with something else and republish.
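Such a replacement really is tiny; a minimal sketch (not the original left-pad source) might look like:

```javascript
// Minimal left-pad replacement sketch (not the original module's code):
// prepend a fill character until the string reaches the requested length.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || ' ';
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

module.exports = leftPad;
```

For example, `leftPad('5', 3, '0')` yields `'005'`.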
It's not as apocalyptic as news articles made it sound, but it is a huge inconvenience and throws a wrench in many systems outside of developer workspaces where left-pad was already cached.
As @Lazar said, you understood correctly.
The problem comes in when Babel relies on left-pad and I am trying to install Babel: the install will fail.
Well, I could always rewrite it myself as a workaround.
But if it is a module used by a module used by a module used by... used by Babel, or by several modules, you face a real nightmare, because Babel can't do anything, nor can you, and you are forced to wait until every single module developer relying on left-pad updates their code.
Is it possible to define different locations in your NPM package for browser and for server (NodeJS)?
My code is largely isomorphic, but it is uglified and concatenated for browsers.
The short answer is that you can't do such a thing. Your dependencies are all stored under the /node_modules folder.
You may override this option by running some patches or an installer script.
Here is a bug raised on GitHub about this issue. It's also described in an official npm blog post.
But don't feel disappointed; you may use Bower as a dependency manager for your client-side code. I prefer it as it feels more semantic and separated:
Bower for the front end, NPM for the back end.
Moreover, npm packages are built for CommonJS only, while Bower packages are more plug-and-play solutions.
I use Node.js (via browserify) for each of my web apps, all of which have some dependencies in common and others specific to themselves. Each of these apps has a package.json file that specifies which versions of which modules it needs.
Right now, I have a /node_modules directory in the parent folder of my apps for modules that they all need to reference, and then I put app-specific modules in a node_modules folder in that app's directory. This works fine in the short term, since my require() statements are able to keep looking upward in the file structure until they find the node_modules directory with the correct app in it.
Where this gets tricky is when I want to go back to an old project and run npm install to make sure it can still find all the dependencies it needs. (Who knows what funny-business has occurred since then at the parent directory level.) I was under the impression that npm install did this:
for each module listed in package.json, first check if it's present, moving up the directory the same way require does. If it's not, install it to the local node_modules directory (creating that directory if necessary).
When I run npm install inside an app folder, however, it appears to install everything locally regardless of where else it may exist upstream. Is that the correct behavior? (It's possible there's another reason, like bad version language in my package.json). If this IS the correct behavior, is there a way for me to have npm install behave like the above?
It's not a big deal to widely replicate the modules inside every app, but it feels messy and keeps me from making small improvements to the common modules without updating every old package.json file. Of course, this could be a good thing...
When I run npm install inside an app folder, however, it appears to install everything locally regardless of where else it may exist upstream. Is that the correct behavior? (It's possible there's another reason, like bad version language in my package.json). If this IS the correct behavior, is there a way for me to have npm install behave like the above?
Yes, that is what npm install does. In node.js code, the require algorithm has a particular sequence of places it looks, including walking up the filesystem. However, npm install doesn't do that. It just installs in place. The algorithms it uses are all constrained to just a single node_modules directory under your current directory and it won't touch anything above that (except for with -g).
It's not a big deal to widely replicate the modules inside every app, but it feels messy and keeps me from making small improvements to the common modules without updating every old package.json file. Of course, this could be a good thing...
Yeah, basically you're doing it wrong. The regular workflow scales well to the Internet. For your use case it creates some extra tedious work, but you can also just use semantic versioning as intended, specify "mylib": "^1.0.0" in your package.json for your apps, and be OK with automatically getting newer versions the next time you run npm install.