Is it worth the trouble to commit package-lock.json? [closed]

I have a NodeJS project published on GitHub that uses a few NPM modules, as specified in my package.json. I have my package-lock.json committed into the repo.
Recently I got notices on my repository about a newly discovered security vulnerability in one of my dependencies. Upon further inspection, it wasn't one of my direct dependencies that had the vulnerability, but rather a module that one of my dependencies depends on. Because all the modules show up in my package-lock.json, the notice tells me to update that dependency to the latest version.
- myproject
  - someDependency
  - anotherDependency
    - aSubDependency
      - anotherOne <--- this one has a security issue
So now I have to ask: is it worth committing a package-lock.json? I wouldn't have any security vulnerabilities in my project if I didn't have a package-lock.json. Now I am forced to update my project and republish simply to update the package-lock.json. If that file weren't there at all, the problem would fix itself, because anyone who installs or updates my project using ONLY the package.json would automatically get the updated dependency from upstream.
Think about it like this. Bob creates moduleA. Then someone else creates moduleB, which depends on moduleA. Then 1000 developers around the world create various projects that depend directly on moduleB. If Bob discovers a security vulnerability in moduleA, those 1000 people now have to update their 1000 projects just to fix it, all because they were committing their package-lock.json.
So is it worth it? Do the advantages of package-lock.json outweigh the drawbacks here?

Yes, it is worth it.
This file is intended to be committed into source repositories, and serves various purposes:
- Describe a single representation of a dependency tree such that teammates, deployments, and continuous integration are guaranteed to install exactly the same dependencies.
- Provide a facility for users to “time-travel” to previous states of node_modules without having to commit the directory itself.
- Facilitate greater visibility of tree changes through readable source control diffs.
- Optimize the installation process by allowing npm to skip repeated metadata resolutions for previously-installed packages.
See npm documentation
See GitHub - "Viewing and updating vulnerable dependencies in your repository"
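As a concrete illustration, you don't have to drop the lock file to get the fix: npm can locate the vulnerable transitive dependency and refresh the lock file in place. A minimal sketch, using the hypothetical package name from the question (npm audit requires npm 6 or later, and exact output varies by version):

npm ls anotherOne    # show where the vulnerable package sits in the tree
npm audit            # report known vulnerabilities in the installed tree
npm audit fix        # apply compatible updates, rewriting package-lock.json

After that, committing the updated package-lock.json ships the fix to everyone who installs from the repository.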

GitHub: prevent committing changes if number of lines is greater than x [duplicate]

We have implemented a coding standard for the company for Node.js backend projects and React front-end projects, where the maximum number of lines in a file should be less than 250. We want to prevent developers from committing files where the line count is greater than 250.
How do we automate this, other than manually reviewing commits?
ESLint has a rule that enforces a maximum number of lines per file. If you include this in your ESLint coding standards, the lint run will fail when your files exceed the limit. You can combine this with a pre-commit Git hook to run the coding standard on each commit.
Unfortunately, Git hooks aren't considered part of the repository and therefore aren't recreated when the repository is cloned. It's a good idea to also have ESLint run in a continuous integration environment so any breaches of the standard are caught when someone pushes, too.
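For reference, a minimal .eslintrc.json sketch of that rule; the 250 limit comes from the question, while the skip* options (standard max-lines settings) are an assumption about what you'd want to count:

{
  "rules": {
    "max-lines": ["error", { "max": 250, "skipBlankLines": true, "skipComments": true }]
  }
}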
On the "client/developer" side you use husky to setup pre-commit hooks to run the check.
Eslint has a max-lines rule that you can use.
For performance reasons you may want to use lint-staged to only run eslint on changed files.
Of course this is all done "on the client side" and you have to trust developers to have the tools installed and configured correctly and to not disable them. If you want to be more sure, you can run eslint as part of the CI and make it fail if it detects any error.
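Putting those pieces together, a hedged package.json sketch; the hook syntax matches husky v4 (newer husky versions configure hooks via script files in a .husky/ directory instead), and the glob and flags are assumptions to adapt to your project:

{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.{js,jsx}": "eslint --max-warnings 0"
  }
}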

Automatically create grunt environment [closed]

I'm a little lost as to how I should proceed; actually, I don't even know where to start.
I've been working on a few WordPress sites lately, and I usually create a dev environment using npm and Grunt.
I set up a folder and run npm init. Then I install all the Grunt plugins I need, such as watch, sass, uglify, etc. I then download WordPress and set up the Gruntfile.js so that, for example, my Sass compiles to my WordPress theme's stylesheet. Just the usual (I hope).
The thing is, rather than repeating the same steps over and over, I'd like to automate them (the configuration steps are the same for each site).
So here is my question: how do you go about creating a script that will automatically install Grunt plugins and configure them, download the latest WordPress, and set up the theme files (domain name, etc.)?
I don't need an explanation of how to do all these steps; just a few pointers on where to start and what tools to use would be great. I'm quite the novice at script writing, so any information is good to have.
Try Yeoman.
There is a Yeoman generator for WordPress boilerplate. It uses Gulp instead of Grunt, but follows the same idea you need.
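The general workflow looks like the sketch below; generator-wordpress is a stand-in for whichever WordPress generator you pick, so treat that name as an assumption:

npm install -g yo                   # the Yeoman CLI
npm install -g generator-wordpress  # hypothetical name: substitute the generator you choose
yo wordpress                        # scaffold a project from the generator's prompts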

Why was npm written in JavaScript? [closed]

I looked into npm's package.json file and discovered that npm is actually just a Node.js package with a lot of dependencies, such as lodash. This means the situation that happened with the left-pad package, which broke a lot of npm packages, could affect npm too.
I see that there is a tendency: pip is written in Python, RubyGems in Ruby, Composer in PHP, Maven in Java, and so on. But is it good to write a package manager in the target language?
More specifically, npm was written using npm itself; JavaScript has nothing to do with the npm left-pad incident. I can't imagine them not using their own product, for several reasons:
- It's a tool for managing software dependencies, and they must use one. Would you propose they use someone else's? Of course, if you trust your product, you're going to use it yourself.
- The left-pad "incident" was a policy flaw more than a software flaw, one they evidently did not predict or consider a serious concern until something serious happened. So why would it be a reason not to use npm?
- Of the hundreds of thousands of packages hosted, this can't have happened too often, or it would have been fixed long ago. That's quite impressive.
- It was pretty easy to fix, just by updating the caching policy, so it's not a threat to npm.
- Other package-management tools have had similar problems (or worse). For example, an entire Maven repository went offline due to lack of funding. This is unlikely to happen to npm because it is centralized and there are many large stakeholders interested in keeping it up.
Incidents like these make the ecosystem more stable and mature. Like all such stories, this one will blow over in no time.
The very reason is that npm is the default package manager for the JavaScript runtime environment Node.js. It is natural for a package manager to be written in the language of its runtime.

Why should one use a module bundler (webpack) over a task-runner (grunt)? [closed]

In the past I've used a Yeoman-generated Grunt setup for all of my dev tasks. Usually when working on a project I'll use it with Compass to compile my SCSS, and then package and uglify my JS, optimize images, lint my code, and do many other useful things.
Recently I've seen a trend toward people using webpack instead of Grunt plugins for many of these tasks. Why is this? What is better about a module bundler in this regard?
I'm sure others have their reasons, but the biggest reason why I migrated to webpack (more specifically webpack-dev-server) is that it serves your bundle from memory (as opposed to disk), and its watcher recompiles only the files you changed while reusing the rest from its in-memory cache. This makes development much faster. What I mean is: while actively editing code, I can hit ctrl + s in Sublime Text, and by the time I alt + tab to Chrome, it's already done rebuilding. I had a grunt + browserify + grunt-watch setup before, and it took at least 5 seconds to rebuild every time I saved (and that was after I'd made a bunch of specialized optimizations in the grunt build). That being said, I still integrated webpack with gulp, so I kept a task-runner for my project.
EDIT 1: I also want to add that the old grunt + LESS/SASS + browserify + uglify + grunt-watch setup didn't scale well. I was working on a major project from scratch. At first it was fine, but as the project grew, it got worse every day. Eventually it became incredibly frustrating to wait for grunt to finish building after every ctrl + s. It also became apparent that I was waiting on a bunch of unchanged files to be reprocessed.
Another nice little thing is that webpack allows you to require stylesheets in .js, which couples source files belonging to the same module. Originally we established stylesheet dependencies by using import in .less files, but that disconnected source files in the same module and established a separate dependency graph. Again, all of this is highly opinionated, and this is just my personal take. I'm sure others think differently.
EDIT 2: Alright, after some discussions below, let me try to offer a more concise and less opinionated answer. One thing that webpack does really well is that it can watch, read, preprocess, update its cache, and serve with a minimal amount of file I/O and processing. Gulp's pipe works pretty well, but when it comes to the bundling step, it inevitably ends up having to read all of the files from a temp directory, including the unchanged ones. As your project grows, the wait time for this step grows as well. On the other hand, webpack-dev-server keeps everything cached in memory, so the waiting during development is kept minimal. However, to achieve this kind of in-memory caching, webpack needs to cover everything from watching to serving, so it needs to know your preprocessing configs. Once you have configured webpack to do that, you might as well reuse the same config for spitting out builds other than the dev server. So we ended up in this situation. That being said, exactly which steps you want webpack to handle is still up to your personal preference. For example, I don't do image processing or linting in my dev server; in fact, my lint step is a totally separate target.
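To make this concrete, here is a minimal webpack.config.js sketch of the kind of setup being described; the paths, the LESS loader chain, and the devServer options are illustrative assumptions rather than the answerer's actual config, and option names vary across webpack-dev-server versions:

// webpack.config.js -- an illustrative sketch, not a definitive setup
const path = require('path');

module.exports = {
  entry: './src/index.js', // assumed entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      // lets you require('./styles.less') from JS, coupling styles to their module
      { test: /\.less$/, use: ['style-loader', 'css-loader', 'less-loader'] },
    ],
  },
  // webpack-dev-server serves the bundle from memory and rebuilds only changed modules
  devServer: {
    hot: true,
  },
};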

Dynamic installation and loading of Node.js module [closed]

I am working on a Node.js module A that uses another Node.js module B from NPM. New versions of module B get published on NPM, and I want module A to update to the latest version of module B dynamically (module A always depends on the latest version of module B). All references to the imported module should also be updated dynamically.
I considered using NPM programmatically to install the latest available version of a module when the installed one is outdated. Are there better solutions to do this dynamically?
Also, how do I forcefully update the module references imported through require?
Thanks!
At least for the first part of your question (module A always using the latest version of module B), you could always specify the dependency in your package.json with a wildcard version:
{
  "dependencies": {
    "moduleB": "*"
  }
}
That would then allow you to npm update to the latest version at any time. (This could, however, introduce changes that break backwards compatibility in your module, since npm uses semantic versioning.)
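For contrast, a hedged sketch of a narrower range: a caret range (standard npm semver syntax) picks up new minor and patch releases while refusing new major versions; moduleB and the version number are hypothetical:

{
  "dependencies": {
    "moduleB": "^1.2.0"
  }
}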
As for running npm update automatically, I have to ask: why is this necessary? What would be the benefit to users of your module? You should be, at the very least, "curating" updates to your dependencies to ensure no breaking changes are introduced. If you did set up a fully automatic pipeline for updating dependencies (from npm updating dependencies to git tagging new versions to npm versioning and npm publishing them, for example), you're leaving users of your module out in the cold should any of these steps break compatibility with their code.
(Also, if this automatic dependency updating never breaks your module, then what is your module adding? Is it even doing anything non-trivial with the dependencies?)
It might seem cumbersome, but it's a better practice to update your dependencies with some craft and intention. See Semantic Versioning for more info.
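For the second part of the question (forcing require to pick up the update), here is a minimal sketch under stated assumptions: the dependency is named moduleB as in the question, and it shells out to the npm CLI rather than using npm's programmatic API. It clears Node's module cache so the next require re-reads the module from disk:

// update-and-reload.js -- illustrative sketch, not production-ready
const { execSync } = require('child_process');

function updateAndReload(name) {
  // install the latest published version of the dependency
  execSync(`npm install ${name}@latest`, { stdio: 'inherit' });

  // drop the cached copy so require() re-reads the module from disk;
  // note: modules that `name` itself requires stay cached, and any
  // code still holding the old reference keeps using the old version
  delete require.cache[require.resolve(name)];

  return require(name); // fresh reference to the updated module
}

const moduleB = updateAndReload('moduleB'); // 'moduleB' is the question's hypothetical package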
