What's a good way to include an npm module in git - javascript

Kind of a noob question here...
We've got a private npm module - a library - that needs to be included in other projects. So far, very simple.
Currently, we're simply "remembering" to manually do an npm run build before pushing changes to our git repo, and then dependent projects, when they do an npm run whatever, are set up to pull from our repo and use the latest version already "compiled" as a module.
So, there are issues with this approach:
It relies on humans remembering to do a build before every push to origin, which is inherently fragile.
VSCode constantly shows me the build artifacts as if they were source files, and git similarly shows merge conflicts relating to those files, which really aren't source at all. They're compilation artifacts, but I'm not sure I want to .gitignore them, because the whole point of all of this is to create those artifacts for use in other projects... so they belong in the repo, just not as source files.
So I'm not sure how to untangle this mess.
I want:
A simple way to update the source that doesn't cause git to complain about merge conflicts in build artifacts, only in actual source files
A simple way to ensure that the build artifacts are always updated upon push to origin (in fact, I'd prefer that it build and run our mocha tests and refuse the push if either fails; something like the hook sketched below)
I'm only about 9mos into using git on github - so there's a ton I don't know...
Ideas for better ways to manage this / automate this - are most welcome!
The key to implementation is of course simplicity. If it's easy to do, I'm sure I'll do it, and can get others to do so. But if it's a huge hurdle every time, well, we all know how well that goes over for other devs...
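One way to get that "refuse to push" behaviour is a git pre-push hook. A minimal sketch, assuming your package.json defines build and test scripts (with test running mocha); note that hooks in .git/hooks aren't version-controlled, so each developer installs it once, or you use a tool like husky to share it across the team:

#!/bin/sh
# .git/hooks/pre-push (make it executable with: chmod +x .git/hooks/pre-push)
# Build the library and run the test suite; any failure aborts the push.
npm run build && npm test || {
  echo "Build or tests failed; push aborted." >&2
  exit 1
}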

Related

Other way to share react ts components between projects

I have been looking for a way to share some code between projects, specifically for SPFx and Fluent UI. We found 3 main ways to do that.
1.
Creating a component library is the way that seemed least complicated, because it uses the same infrastructure and does all the building without the need to configure it.
But this adds some issues: we need to build and manually link the solution locally to make it work. This would also work if we put it in a repo, so that part is mitigated. The second issue is that it implicitly also requires Fluent UI and React, plus the code has to live inside an SPFx component library project.
2.
I saw some promise using paths in ts, and this works fine while using the ts compiler: it will go to the folder that your project refers to and build it at compile time, which is great. But it did not work in SPFx. (A tsconfig sketch follows this list.)
3.
Another way was to have a postinstall script sync the folders, which seems easy enough, but I wonder how practical it is, and how people who do this actually set it up, if they do.
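For reference, the paths approach from option 2 looks roughly like this in tsconfig.json (folder names are hypothetical):

{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "shared-lib": ["../shared-lib/src/index"],
      "shared-lib/*": ["../shared-lib/src/*"]
    }
  }
}

Note that paths only tells the TypeScript compiler how to resolve the import; a bundler (webpack, in SPFx's case) needs its own matching alias configuration, which is likely why this approach fails inside SPFx.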
All I want to figure out now is a way to take my component code and share it as if it were in a folder of my src, a simple extension of the code. No need for extra dependencies or build steps, just code that can be used as a ts/tsx file. For example:
shared lib:
// assuming I have React and Fluent UI (@fluentui/react) already installed for the project
import * as React from 'react';
import { DefaultButton as Button } from '@fluentui/react';
export const fancyCustomButtom = (props) => {
  return (<Button text="Standard" />);
};
src project folder:
import { fancyCustomButtom } from 'shared-lib';
It is fine if it needs to build the files before we can use them, but can we do that at build time or when the package is installed? Also, wouldn't it increase my bundle size by making both modules depend on things that are already available (react, fluentui)?
Given the way Microsoft have architected the loading of bundles in SharePoint and Teams - I believe an SPFX component library is the best way to share code between different solutions, particularly if you are looking to minimise bundle size...
Imagine you have a library for something re-usable: a form, a set of standard branded components, something of that nature. You could put your shared code in repos and add references to it, either by publishing your own repo publicly or using the npm install git+https://yourUrl syntax; but what effectively happens there is that your code is pulled down into node_modules for each project, and any referenced module code is included in your bundles. If you have two, three, four or more webparts on the same page using those same libraries, you're multiplying how many times that code is included on the page.
If you follow Microsoft's guide on setting up a component library project however, your npm link commands allow your types to be recognised in consuming projects without needing to actually include the bundled distribution code. You can deploy your library code once to the App Catalog, and if it's referenced in other solutions -- it's loaded on pages as needed: once.
I have found the development experience to be quite flaky at times, but it does work. When I run gulp clean on my library code, or come back to it after some time, I sometimes find that I need to run npm link and npm link my-project-name again as per the instructions in the above tutorial. Whenever you run gulp build on your library, you should also rebuild the project that consumes the library, either by using gulp build / bundle or by saving a file (if you're running gulp serve). This process works well for developing, and once it comes time to deploy, all you need to do is add a named reference to your library inside package.json and then deploy both .sppkg files to your App Catalog.
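For anyone following along, the link workflow referred to above is roughly this (the library name is whatever the "name" field in your library's package.json says; "my-component-library" here is a placeholder):

# in the library project: register it as a global symlink
cd my-component-library
npm link

# in each consuming project: point node_modules at that symlink
cd ../consuming-webpart
npm link my-component-library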
In terms of your last question re: bundle size: react is not actually included in the dependencies for an SPFx library project, but you will find it's available to use. When you build your library, if you take a look at the generated javascript in your dist folder, you will see react listed as one of the dependencies for the webpacked content, along with react-dom and ControlStrings. It's on line 1.
office-ui-fabric-react is included as a devDependency thanks to the @microsoft/sp-webpart-workbench package that gets scaffolded with all SPFx projects, and if you check your library's dist javascript, you will see that included components are being webpacked into your bundle. I'm not entirely sure whether, when you pull this code into your consuming project, webpack then tree-shakes to de-duplicate and ensures only necessary code is included. Someone else may be able to illuminate us there or provide a more accurate explanation of what's going on... I will update this response if anyone comments to let me know.
And finally, this is more of a personal way of working, but it may be worth consideration:
When developing a library, I sometimes reference it in other projects via a local npm install ../filepath command. This ensures that when I install the library as described, the consuming project installs any necessary dependencies, and I'm able to tweak both projects if I need to. When it comes time to deploy, I commit my changes to both projects, deploy my library code to the App Catalog, and then npm uninstall the library from the consuming project and add a reference as described in the above tutorial. When I deploy projects that use my library, they just work.
I recently developed a library that uses pnpjs, in particular the @pnp/sp library that is used to talk to SharePoint. If you look at the Getting Started guide for that library, they expect you to pass a reference to your Application Customizer or Web Part context during setup, or to explicitly set things up using a base URL and so forth. Of course, a library doesn't really have a page context of any sort; the whole point of this code is that it's reusable. So that poses a challenge. My solution was to do the setup in the consuming web part(s) and ensure that they pass a reference to the sp object (which is of type SPRest) to any code or components that exist in my library. My library declares peerDependencies on those pnp libraries so that the code isn't duplicated in consuming projects. Sometimes you have to think about what your library needs to include in its bundle and what you expect consuming solutions to already have, and find ways to keep out things that aren't needed.
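As a sketch, the relevant part of such a library's package.json might look like this (package names and version ranges are illustrative):

{
  "peerDependencies": {
    "@pnp/sp": "^2.0.0",
    "@pnp/logging": "^2.0.0"
  }
}

Declaring a peer dependency tells npm that the consuming project is expected to provide the package, so the library doesn't ship its own copy.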
For example, in the scenario you describe, you may want to ensure fluentui or office-ui-fabric-react is only a devDependency or peerDependency of your library. As long as your library and the project(s) consuming it use compatible versions, you shouldn't have any trouble, and you can document any prerequisites in your library documentation. You can check which versions of these libraries are installed for the SPFx version you are currently using (i.e. SPFx v1.11, v1.12, etc.): just run npm ls <packagename> to get a breakdown, or check your package.json file.

Is it okay to link node modules files in webpack.config.js?

In some projects, I saw developers didn't link to node_modules files in webpack.config.js (e.g. "./node_modules/bootstrap/dist/js/bootstrap.bundle.js"); instead, they copied the file to assets/js and linked it there. Some of my friends also told me that they prefer this option because they never feel safe linking into node_modules (I guess because somebody may run npm update...?)
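For concreteness, the pattern being asked about looks something like this (a minimal sketch using the path from the question):

// webpack.config.js
module.exports = {
  entry: {
    app: './src/index.js',
    vendor: './node_modules/bootstrap/dist/js/bootstrap.bundle.js',
  },
};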
What would you call a "good practice"? Is it totally fine to link to node_modules? If not - what wrong can happen?
I used the node_modules path in small projects, as I don't see a need for duplicating files, but in larger ones, for peace of mind, I used the path to assets.
It can be okay to do it. Purely from the build step perspective, it doesn't make a difference.
The trade-offs you are making between using the modules as npm provides them (in node_modules) and storing your own copies in an assets or vendors folder come down to:
security
source code management & development efficiency
storage space
When the thousands of developers around the world create little pet projects and push them to GitHub, it wouldn't make sense for all of them to store their own copy of jQuery and push it into their GitHub repos. Instead we push a package.json file that lists it as a dependency; we do this for every third-party dependency and avoid creating a repository where a lot (even most) of the code is not application code but dependencies. That is good.
On the other hand, if a developer downloads dependencies every time a new project is started/cloned/forked, every module download carries the risk of installing a compromised package version. We address this with vulnerability scanners, semantic versioning, and lock files (package-lock.json), which give you control over how and when you get updates.
Another problem with always downloading is the bandwidth it consumes. This is addressed with a local cache: even if you uninstall a module from one project, npm doesn't really delete it from your drive; it keeps a copy in a cache folder. This works really well for most developers, but not so well in an enterprise environment with massive applications.
A problem that has already hit the world severely is that if a module author decides to delete their code, lots of apps stop working because they can't find the dependency anymore. See how left-pad broke Node, Babel... (it also broke things at my work).
The issue with moving things out of node_modules into assets is that if your app has 100 dependencies, you are not going to want to do that 100 times. You might as well commit the complete source code found in node_modules to your source control system. That comes at a price, of course: that folder can be huge.
A good balance can be found by using different tools and approaches. Whether you vendorize third-party dependencies (store your own copy) or not depends on which option has the better cost/risk ratio in your situation.

test requirements for published npm packages [duplicate]

What exactly should I put in .npmignore?
Tests? Stuff like .travis.yml, .jshintrc? Anything that isn't needed when running the module (except the readme)?
I can't find any guidance on this.
As you probably found, npm doesn't really state specifically what should go in there; rather, it has a list of ignored-by-default files. Many people don't even use it, since everything in your .gitignore is ignored by npm by default if .npmignore doesn't exist. Additionally, many files are ignored by default regardless of settings, and some files are always excluded from being ignored, as outlined in the link above.
There is not much official guidance on what should always be in there, because it is basically a subset of .gitignore, but from what I gather from using node for 5-ish years, here's what I've come up with.
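As an aside, on reasonably recent versions of npm you can check exactly what would be packed, without publishing anything, which takes the guesswork out of the categories below:

npm pack --dry-run     # lists the files that would go into the tarball
npm publish --dry-run  # same idea, as part of a simulated publish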
Note: By production I mean any time where your module is used by someone and not to develop on the module itself.
Pre-release cross-compiled sources
Pros: If you are using a language that cross-compiles into JavaScript, you can precompile before release and not include .coffee files in your package but keep tracking them in your git repository.
Build file leftovers
Pros: People using things like node-gyp might have object files that get generated during a build that never should go into the package.
Cons: This should always go into the .gitignore anyway. You must place these things inside here if you are using a .npmignore file already as it overrides .gitignore from npm's point of view.
Tests
Pros: Less baggage in your production code.
Cons: You cannot run the tests in live environments, in the slim chance there is a system-specific failure, such as an out-of-date version of node causing a test to fail.
Continuous integration settings/Meta files
Pros: Again, less baggage. Things such as .travis.yml are not required for using, testing, or viewing the code.
Non-readme docs and code examples
Pros: Less baggage. Some people exist in the school-of-thought where if you cannot express at least minimum viable functionality in your Readme, your module is too big.
Cons: People cannot see exhaustive documentation and code examples on their own file system. They would have to visit the repository (which also requires an internet connection).
Github-pages objects
Pros: You certainly don't need to litter your releases with CNAME files or placeholder index.html files if your module serves double duty as a gh-pages repository as well.
bower.json and friends
Pros: If you decide to build in your dependencies prior to release, you don't need the end user to install bower and then install more things with it. I would, personally, keep that stuff in the package: when I do an npm install, I should only be relying on npm and no other external sources.
Basically, you should only use it if there is something you wish to keep out of your npm package but checked in to your module's repo. It's not a long list of items, but npm would rather build in the functionality than have people stuck with irrelevant objects in their packages.
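Putting those categories together, a sample .npmignore might look like this (the file and folder names are examples from a typical layout, not a standard):

# tests and CI/meta files
test/
.travis.yml
.jshintrc
# pre-release sources and build leftovers
src/*.coffee
build/
# non-readme docs and examples
docs/
examples/
# gh-pages objects
CNAME
index.html
# bower.json - optional; as noted above, I'd personally keep it in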
I agree with lante's short and concise answer and SamT's big answer:
You should not include your tests in your package.
Your package should only contain production runtime files.
That will make your package more straightforward and faster to download.
My contribution to those answers:
.npmignore is the blacklist way to achieve package file selection. But in a more practical way, you can whitelist the files you need to include in your package using the files field in your package.json:
{
  "files": [
    "lib/",
    "index.js"
  ]
}
I think that's simpler, future-proof, and has better semantics ;)
Just to clarify: anytime someone does npm install your-library, npm will download all the source files that the package includes. Files matched by the .npmignore file in the source code of your-library will be excluded when the lib is published, so users of your-library won't download them.
Keep in mind that people installing your library just need the library to run; anything else is unnecessary.
For example, when someone installs a library, they probably don't care about your .travis.yml or .jshintrc files, or even some images, Grunt files, documentation, etc.
Using .npmignore lets your npm package contain fewer files and download faster.
Don't include your tests. Oftentimes tests are like 5x the size of the actual codebase. As long as your tests are on GitHub, etc., that's good enough.
But what you absolutely should do is test your NPM package in its published format. Create some smoke tests that reside in the actual codebase, but are not part of the test suite.
You can read about testing your package after tarballing it, here:
https://github.com/ORESoftware/r2g
How to test an `npm publish` result, without actually publishing to NPM?
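A minimal version of that idea, without extra tooling, is to pack the tarball and install it into a scratch project (package name and paths here are hypothetical):

npm pack                          # produces my-lib-1.0.0.tgz from the current project
mkdir /tmp/smoke && cd /tmp/smoke
npm init -y
npm install /path/to/my-lib/my-lib-1.0.0.tgz
node -e "require('my-lib')"       # fails loudly if the published file set is broken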

Excessive Recursion of Node.js Project Dependencies

So after a long day at work, I sit down and see an alert for Windows SkyDrive in the system tray:
Files can't be uploaded because the path of this file or folder is too long. Move the item to a different location or shorten its name.
C:\Users\Matthew\SkyDrive\Documents\Projects\Programming\angular-app\server\node_modules\grunt-contrib-nodeunit\node_modules\nodeunit\node_modules\tap\node_modules\runforcover\node_modules\bunker\node_modules\burrito\node_modules\traverse\example\stringify.js
... and for a while, I laughed at that technological limitation.
But then, I wondered: is that amount of directory recursion within a Node project really necessary? It would appear that the paths beyond "angular-app\server\node_modules" are simply dependencies of the project as a whole and might be better expressed as:
C:\Users\Matthew\SkyDrive\Documents\Projects\Programming\angular-app\server\node_modules\grunt-contrib-nodeunit\
C:\Users\Matthew\SkyDrive\Documents\Projects\Programming\angular-app\server\node_modules\nodeunit\
C:\Users\Matthew\SkyDrive\Documents\Projects\Programming\angular-app\server\node_modules\tap\
C:\Users\Matthew\SkyDrive\Documents\Projects\Programming\angular-app\server\node_modules\runforcover\
C:\Users\Matthew\SkyDrive\Documents\Projects\Programming\angular-app\server\node_modules\bunker\
C:\Users\Matthew\SkyDrive\Documents\Projects\Programming\angular-app\server\node_modules\burrito\
C:\Users\Matthew\SkyDrive\Documents\Projects\Programming\angular-app\server\node_modules\traverse\
I hadn't really given it much thought before, as package management in Node seems like magic compared to many platforms.
I would imagine that some large-scale Node.js projects contain many duplicate modules (having the same or similar versions) which could be consolidated into fewer. It could be argued that:
The increased amount of data stored and transmitted as a result of duplicate dependencies adds to the cost of developing software.
Shallower directory structures (especially in this context) are often easier to navigate and understand.
Excessively long path names can cause problems in some computing environments.
What I am proposing (if such a thing does not exist) is a Node module that:
Recursively scans a Node project, collecting a list of the nested node_modules folders and how deeply they are buried in relation to the root of the project.
Moves the contents of each nested node_modules folder to the main node_modules folder, editing the require() calls of each .js file such that no references are broken.
Handles multiple versions of duplicate dependencies
If nothing else, it would make for an interesting experiment. What do you guys think? What potential problems might I encounter?
See if npm dedupe sets you right.
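In its simplest form, run from the project root:

npm dedupe    # hoists compatible duplicates to the shallowest possible level
npm ls        # inspect the resulting dependency tree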
See fenestrate, npm-flatten, flatten-packages, npm dedupe, and multi-stage-installs.
Quoting Sam Mikes from this StackOverflow question:
npm will add dedupe-at-install-time by default. This is significantly more feasible than Node's module system changing, but it is still not exactly trivial, and involves a lot of reworking of some long-entrenched patterns.
This is (finally) currently in the works at npm, going by the name multi-stage-install, and is targeted for npm@3. npm development lead Forrest Norvell is going to spend some time running on Windows in the new year, so please do create Windows-related issues on the npm issue tracker: https://github.com/npm/npm/issues

Bundler for javascript, or how to source control external javascript files

I am in the process of converting an existing Rails 3.1 app I made for a client into a Backbone.js app with the Rails app only as a backend server extension. This is only a personal project of mine, to learn more about Backbone.js.
While setting up Backbone.js (using Backbone-on-Rails), I noticed I have some dependencies (like backbone-forms) that come from external sources and are frequently updated.
I've grown accustomed to using Bundler to manage my Ruby gems, but I haven't found anything similar for JavaScript files. I'm wondering if there is any way to do the same for JavaScript (and possibly CSS) files.
Basically I can see three possibilities to solve this issue:
Simply write down all the sources for each JS file and check these sources from time to time to see what has changed.
Use some kind of existing "Bundler for Javascript" type of tool, I've been looking for something like this but have yet to find anything (good).
Since most of these JS files will be coming from Git anyway, use Git to get the files directly and use checkout to get the latest version from time to time.
I prefer the last option, but was hoping for some more input from other people who have gone this route or prefer some other way to tackle this issue (or is this even an issue?).
I figure the Git way seems easy, but I'm not quite sure yet how I could make it work nicely with Rails 3.1 and Sprockets. I guess I'd check out a single file using Git and have it cloned into a directory that is accessible to Sprockets, but I haven't tried this yet.
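For concreteness, option 3 might look something like this (repository URL and paths are illustrative):

# clone the dependency somewhere outside the app, once
git clone https://github.com/powmedia/backbone-forms.git ~/dependencies/backbone-forms
# from time to time, pull the latest version
cd ~/dependencies/backbone-forms && git pull
# copy the file into a directory Sprockets can see
cp src/backbone-forms.js ~/projects/my-app/vendor/assets/javascripts/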
Any thoughts?
You don't mention it in your alternatives, but ideally you should use something like Maven to manage your dependencies. Unfortunately, there are no public repositories for javascript files. This discussion lists some other options which might be of help to you: JQuery Availability on Maven Repositories
For now I've settled on using the Git solution combined with some guard-shell magic.
The steps I follow:
Create a dependencies directory somewhere on your local drive
Clone the repositories with javascript (or css) files you want to use in the app
Set up a custom guard-shell command to do the following:
group 'dependencies' do
  guard 'shell' do
    dependencies = '~/path/to/dependencies/'
    watch(%r{backbone-forms/src/(backbone\-forms\.js)}) { |m| `cp #{dependencies + m[0]} vendor/assets/javascripts/#{m[1]}` }
  end
end
Place the Guardfile at the root of the app directory
It takes some time to set things up, but after that, when you have Guard running and you pull changes into your dependencies, the required files are automatically copied to your application directory, where they become part of your repository.
It seems to work great. You need to do some work for each new file you want to include in the asset pipeline, but all that is required is cloning the repository into your dependencies directory and adding a single line to your Guardfile; for example, for the backbone-forms css:
watch(%r{backbone-forms/src/(backbone\-forms\.css)}) { |m| `cp #{dependencies + m[0]} vendor/assets/stylesheets/#{m[1]}` }
Also, the reason I added this Guard to a group is that I keep my dependencies outside the main application directory, which means Guard normally doesn't watch my dependencies directory. To make this work, I start my main Guard processes using bundle exec guard -g main and, in a new terminal window/tab, run bundle exec guard -w ~/path/to/dependencies -g dependencies to specify the -w(atchdir).
