Ember addon vs. normal package - JavaScript

I am trying to understand the differences between an Ember addon and a normal package, mostly from the consuming side, in my app.
I would like to understand the following points:
Differences in consuming the two: what do we need to import, what Brocfile changes are needed, etc.? How are these made available to individual modules, say inside a route or controller?
Is the installation process the same for both? Can both of them live in any repo or registry (like the npm or bower registry)?
How or where do they reside in the build output, i.e. in the dist folder?
How do we decide whether to package something as an addon vs. a normal package (this is more from a developer perspective)?
You can also highlight any other significant differences.

Disclaimer: this is my understanding, but I have not built an addon myself, so I might have some misconceptions.
Ember addons are basically normal packages with some additional structure that makes them easier to integrate into an Ember application.
When using an Ember addon, you import things exactly as you would from any normal package. The only difference comes from objects that need to be registered with the resolver (services, adapters, helpers, etc.): they are automatically detected, added to your project, and registered. This is why, after installing, say, ember-notify, you can just Ember.inject.service('notify') in some component/controller and it will work.
The details are for the addon author to choose. Usually, addons register common objects that benefit from dependency injection (template helpers and services, mostly - though some addons define new injection types and ship objects for them; for instance, ember-validations registers its validators). For other functionality, you import things normally (import thing from 'addon/thing';).
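For instance, a minimal sketch of what consuming such a registered service can look like in the consuming app (the component and its save action are illustrative; ember-notify's exact API may differ):
// app/components/save-button.js (hypothetical component in the consuming app)
import Ember from 'ember';

export default Ember.Component.extend({
  // 'notify' was registered automatically when the ember-notify addon was installed
  notify: Ember.inject.service('notify'),

  actions: {
    save() {
      // use the addon-provided service like any other injected service
      this.get('notify').success('Saved!');
    }
  }
});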
Ember addons are installed using npm (you find them in your project, under node_modules). You can even install them with npm yourself, just remember to add them to package.json so they are included by ember build.
In the build output, they should simply be packaged, all together, into assets/vendor.js.
I would say the rule of thumb should be: if it is Ember-specific, make it an addon; otherwise, stick to a normal package. But really, an Ember addon is basically a normal package with a specific layout.

Related

Other way to share react ts components between projects

I have been looking for a way to share some code between projects, specifically for SPFX and Fluent UI. We found 3 main ways to do that.
1.
Creating a component library is the way that seemed least complicated, because it uses the same infrastructure and does all the building without the need to configure it.
But this adds some issues: we need to build and manually link the solution locally to make it work. This will also work if we put it in a repo, so that part is mitigated.
The second issue is that this implicitly also requires Fluent UI and React, plus having to place the code inside an SPFX component library project.
2.
I saw some promise in using paths in TypeScript, and this works fine when using the TS compiler. It will go to the folder that your project references and build it at call time, which is great. But it did not work in SPFX.
3.
Another way was to have a post-install script that syncs the folders, which seems easy enough, but I wonder how practical this is, plus how people are doing it, if they are.
All I want to figure out now is a way to take my component code and share it as if it were in a folder of my src, or a simple extension of the code. No need for extra dependencies or build steps, just code that can be used as a ts/tsx file. For example:
shared lib:
// assuming I have React and Fluent UI already installed for the project
import { DefaultButton } from 'office-ui-fabric-react';
export const fancyCustomButton = (props) => {
  return (<DefaultButton text="Standard" />);
};
src project folder:
import { fancyCustomButton } from 'shared-lib';
It is fine if the files need to be built before we can use them, but can we do that at build time or when the package is installed? Also, wouldn't it increase my bundle size by making both modules depend on things that are already available (React, Fluent UI)?
Given the way Microsoft have architected the loading of bundles in SharePoint and Teams - I believe an SPFX component library is the best way to share code between different solutions, particularly if you are looking to minimise bundle size...
Imagine you have a library for something re-usable: a form, a set of standard branded components - something of that nature. You could put your shared code in repos and add references to it - either by publishing your own repo publicly or using the npm install git+https://yourUrl syntax; but what effectively happens there is that your code is pulled down into node_modules for each project, and any referenced module code is included in your bundles. If you have two, three, four or more webparts on the same page using those same libraries, you're multiplying how many times that code is included on the page.
If you follow Microsoft's guide on setting up a component library project however, your npm link commands allow your types to be recognised in consuming projects without needing to actually include the bundled distribution code. You can deploy your library code once to the App Catalog, and if it's referenced in other solutions -- it's loaded on pages as needed: once.
I have found the development experience to be quite flaky at times, but it does work. When I run gulp clean on my library code, or come back to it after some time, I sometimes find that I need to run npm link and npm link my-project-name again as per the instructions in the above tutorial. Whenever you run gulp build on your library, you should also rebuild the project that consumes the library, either by using gulp build / bundle or by saving a file (if you're running gulp serve). This process works well for developing, and once it comes time to deploy, all you need to do is add a named reference to your library inside package.json and then deploy both .sppkg files to your App Catalog.
In terms of your last question re: bundle size - react is not actually included in the dependencies for an SPFX library project, but you will find it's available to use. When you build your library, if you take a look in the generated javascript in your dist folder, you will see it's listed as one of the dependencies for the webpacked content along with react-dom and ControlStrings. It's on line 1.
office-ui-fabric-react is included as a devDependency thanks to the @microsoft/sp-webpart-workbench package that gets scaffolded with all SPFX projects - and if you check your library's dist javascript, you will see that included components are being webpacked and included in your bundle. I'm not entirely sure whether, when you pull this code into your consuming project, webpack then tree-shakes to de-duplicate and ensures only necessary code is included: I don't know. Someone else may be able to illuminate us there or provide a more accurate explanation of what's going on... I will update this response if anyone comments to let me know.
And finally, this is more of a personal way of working, but it may be worth consideration:
When developing a library, I sometimes reference it in other projects via a local npm install ../filepath command. This ensures that when I install the library as described, the consuming project installs any necessary dependencies. I'm able to tweak both projects if I need to. When it comes time to deploy, I commit my changes to both projects, deploy my library code to the App Catalog, and then npm uninstall the library from the consuming project and add a reference as described in the above tutorial. When I deploy projects that use my library, they just work.
I recently developed a library that uses pnpjs, in particular the @pnp/sp library that is used to talk to SharePoint. If you look at the Getting Started guide for that library, they expect you to pass a reference to your Application Customizer or Web Part context during setup, or explicitly set things up using a base URL and so forth - and of course, a library doesn't really have a page context of any sort - the whole point of this code is that it's reusable. So that poses a challenge. My solution was to do the setup in the consuming web part(s) and ensure that they pass a reference to the sp object (which is of type SPRest) to any code or components that exist in my library. My library has peerDependencies on those pnp libraries so that the code isn't duplicated in consuming projects. Sometimes you have to think about what your library needs to include in its bundle and what you expect consuming solutions to already have, and maybe find ways to ensure things aren't included that aren't needed.
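A rough sketch of that pattern, with illustrative names (the library never calls setup itself; it just uses whatever sp instance the consuming web part hands it):
// shared library: siteInfo.js (illustrative file name)
export async function getSiteTitle(sp) {
  // sp is the already-configured SPRest instance passed in by the web part
  const web = await sp.web.get();
  return web.Title;
}

// consuming web part, e.g. inside onInit():
// import { sp } from '@pnp/sp';
// sp.setup({ spfxContext: this.context });
// const title = await getSiteTitle(sp);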
For example, in the scenario you talk about, you may want to ensure fluentui or office-ui-fabric-react are only devDependencies or peerDependencies for your library. As long as your library and the project(s) consuming your library both use the right version(s), you shouldn't have any trouble, and you can document any prerequisites in your library documentation. You can check which versions of these libraries are installed per the SPFX version you are currently using, i.e. SPFX v1.11 or v1.12 etc. Just run npm ls <packagename> to get a breakdown, or check your package.json file.

Can you import a `devDependency` in your code?

Mobx DevTool's README guides you to install it as a dev dependency, and then import it into your code. That seems like a problem to me, because devDependencies, as explained by this SO answer, are:
... used for the build process, tools that help you manage how the end code will end up, third party test modules, (ex. webpack stuff)
If that's true, then is it a correct deduction that you shouldn't import a devDependency into your code?
It depends on how the code that will import the dependency will be used. If it will only be used in a development context, then yes, it should be a devDependency and it's fine for the code to use it.
If the code that uses it is going to be used by a consumer of your package/module/library, then it is a direct dependency and should be annotated as such.
It comes down to how you view the intention of the source file that consumes the dependency. If I install with npm install your-cool-package, I don't want, and shouldn't need, the devDependencies installed, because I'm likely not going to be building your module from source, benchmarking, or testing. I'm just going to consume it.
If I need the dependency to consume your module, then it isn't a devDependency; it is a straight-up dependency (or maybe a peerDependency).
Ask yourself this: When I use your module, for it to function, does the dependency need to be there for it to function? If it does, it's a dependency. If not, it's a dev dependency.
If you author a plugin for something, it depends on the thing it plugs in to. If you author a plugin that helps people develop things, the plugin still depends on what it plugs in to, but the person installing your plugin would consider your plugin a development dependency because they don't need it when they aren't developing. However, your plugin still depends directly on the thing that it plugs in to.
It sounds like you are writing a plugin (let's call it cool-plugin) for a module (Mobx). I don't use Mobx, but it sounds like a development tool. Since cool-plugin needs Mobx in order to do anything, Mobx is a dependency of cool-plugin, but a consumer would consider cool-plugin a devDependency of their own hypothetical consumer-module because they don't need it in production.
Because of that, you should list Mobx under dependencies, because your module doesn't make sense without it.
There is a case to be made that you should actually list it under peerDependencies, because consumer-module probably doesn't want you to install your own version of Mobx, but instead wants you to interface with the one they should already be using.
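For example, the plugin's package.json might look roughly like this (names and versions are illustrative): Mobx is a peerDependency so consumers bring their own copy, and it is also listed under devDependencies so the plugin's own tests can run:
{
  "name": "cool-plugin",
  "peerDependencies": {
    "mobx": "^5.0.0"
  },
  "devDependencies": {
    "mobx": "^5.0.0",
    "jest": "^26.0.0"
  }
}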

Publish Aurelia component on NPM?

We have built a small-ish application using Aurelia, and would like to be able to integrate the app into a larger codebase. For example, it may be desirable to publish the Aurelia app on NPM, so other projects could integrate our code.
How can we build/publish the Aurelia app, so that it can be instantiated in a larger JavaScript application?
Typically, you wouldn't publish an entire Aurelia application to NPM; rather, you would publish plugins to NPM.
The simplest way to learn how to do this is to follow the example of the various Aurelia repos. We build the code into multiple formats, the most important being AMD and CommonJS (which one matters more for your company depends on what tooling your company is using). Once you've built the code, the next step is to make sure your package.json file is set up correctly. It's best to copy the example of what we're doing in our own repos.
I would start by looking at either the dialog plugin or the templating-resources plugin. Both of these show good examples of how to publish your Aurelia plugin to NPM, whether it is a public or private npm feed.
https://github.com/aurelia/dialog/blob/master/package.json
https://github.com/aurelia/templating-resources/blob/master/package.json
Both of these plugins are set up to support Webpack, JSPM/SystemJS, and the Aurelia CLI out of the box.
I totally agree with @Ashley about not publishing larger applications to the global NPM registry. The big advantage of NPM is the simplicity of all those small packages, which can be used to build large applications.
If you feel you have some reusable code, put it in its own package and publish it.
Anyway, to give you an answer which does not require you to publish your complete app: you can include a complete repository (which is probably what you are looking for) in a different application via NPM.
npm install git://github.com/somegit/hubrepo.git
Or directly via the package.json:
"dependencies": {
"private-repo": "git+ssh://git#github.com:somegit/hubrepo.git#v1.0.0",
}
You can also do something similar, e.g. using JSPM. This would simply be:
jspm install your-app=github:OrgOrUser/your-app#your-branch
If you are facing long relative paths in your imports now, depending on your tooling, you may be interested in something like e.g. Resolving require paths with webpack which shows how to alias relative paths.
Sources/Links:
How to install a private NPM module without my own registry?
npm install private github repositories by dependency in package.json

NPM: should I use one of my dependencies' dependencies, or should I explicitly install it into the project at the root level?

TL;DR Summary
(I'm using Lodash as an example here, but it could be any other package)
In addition to using Lodash for its own purposes, my application also needs to import JavaScript from an NPM package that I created. The JavaScript in this package relies on Lodash as well. It's possible that each codebase may have installed a different version of Lodash. If JavaScript in my application and JavaScript in the installed package both import the same Lodash functions, then I want to avoid having to bundle two different versions of the same function. I understand that NPM is able to resolve the dependencies and that nothing will break, but the size of my application's JavaScript bundle will continue to grow as each codebase uses functions from different versions of the same libraries. It sounds like the only way to keep the versions in sync is to continuously monitor them and upgrade manually when appropriate, or to use the version provided by the installed package directly, without ever installing it into my application's own package.json. Is doing the latter a bad idea and is there no better way?
Original Question
At my company, we've created a Git repository that houses most of our UI component code. This repository also contains a static site generator, which transforms our UI component code into a "living style guide" website. The purpose of this website is to document and showcase our UI components on the web (similar to how PatternLab works).
We also distribute this code via NPM, so that it can be shared across multiple projects. Each project installs the NPM module as a dependency, then imports the SASS and JavaScript files contained within. The JavaScript has been written in ES6 and has not been bundled or transpiled. We've intentionally chosen not to distribute browser-ready code. Instead, each project is responsible for compiling its own SASS and bundling/transpiling its own JavaScript.
Most of our UI component JavaScript is simple and does not depend on any third-party libraries, so it's easy to import into our projects. However, some of our newer, more complex components rely on NPM packages such as Lodash, which presents a problem.
Obviously, we need to install Lodash in order for the static site generator to showcase our Lodash-reliant components inside of a web browser. Similarly, projects that consume the NPM package will also need to install Lodash, in order to create instances of these same components. This forces us to install Lodash twice: once in the UI component project, then again in the project that consumes the NPM package. This is problematic because the two projects could potentially install different versions of Lodash, which could lead to compatibility issues and/or increase the size of our JavaScript bundle.
One solution that I've discovered is to include Lodash under dependencies instead of devDependencies, in the UI component project. This way, when external projects install the UI component NPM module, Lodash will be installed along with it. This gives the project "free" access to Lodash without needing to explicitly install it itself. This is possible because NPM installs packages in a single, flat directory hierarchy, so it doesn't seem to matter if your project installs a package directly or if one of its dependencies exposes it as a dependency in its own package.json. This eliminates version conflicts, since you don't have to install the package twice.
My question is, does this violate NPM best practices or is this how NPM is intended to work? After reading the NPM documentation and Googling for answers, it doesn't seem like this should be a problem. However, if what I'm suggesting is a bad idea, how else can I accomplish what I'm trying to do?
Here's a quick visual aid:
main.js
node_modules/
  lodash/
  foo/
    bar.js
    node_modules/
      lodash/
main.js imports and uses Lodash. It also imports foo/bar.js, which uses Lodash too, but a potentially different version. Both files are ES6. main.js gets bundled and transpiled before being sent to the browser.
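To make that concrete, here is a minimal sketch of the two files (imports and names are illustrative):
// main.js (the application)
import { debounce } from 'lodash';      // resolved from ./node_modules/lodash
import { fancyWidget } from 'foo/bar';  // resolved from ./node_modules/foo/bar.js

// node_modules/foo/bar.js (the shared UI component package, shipped as untranspiled ES6)
import { debounce } from 'lodash';      // may resolve to ./node_modules/foo/node_modules/lodash

export function fancyWidget(callback) {
  return debounce(callback, 250);
}

// If foo pins a different lodash version, the final bundle carries two copies of debounce.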
If it is something you are directly using, you should specify it in your package.json. It will be installed anyway, but this way you ensure that if your dependency removes that package from its own dependencies, your project won't break.
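A rough illustration of that advice (package names and versions are illustrative): even though foo already depends on lodash, the application declares its own requirement too, so removing lodash from foo's dependencies can't silently break main.js:
"dependencies": {
  "foo": "^1.2.0",
  "lodash": "^4.17.0"
}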

What is the difference between Bower and npm?

What is the fundamental difference between bower and npm? Just want something plain and simple. I've seen some of my colleagues use bower and npm interchangeably in their projects.
All package managers have many downsides. You just have to pick which you can live with.
History
npm started out managing node.js modules (that's why packages go into node_modules by default), but it works for the front-end too when combined with Browserify or webpack.
Bower is created solely for the front-end and is optimized with that in mind.
Size of repo
npm is much, much larger than bower, including general-purpose JavaScript (like country-data for country information, or sorts for sorting functions) that is usable on the front end or the back end.
Bower has a much smaller amount of packages.
Handling of styles etc
Bower includes styles etc.
npm is focused on JavaScript. Styles are either downloaded separately or required by something like npm-sass or sass-npm.
Dependency handling
The biggest difference is that npm does nested dependencies (but is flat by default) while Bower requires a flat dependency tree (putting the burden of dependency resolution on the user).
A nested dependency tree means that your dependencies can have their own dependencies which can have their own, and so on. This allows for two modules to require different versions of the same dependency and still work. Note since npm v3, the dependency tree will be flat by default (saving space) and only nest where needed, e.g., if two dependencies need their own version of Underscore.
Some projects use both: they use Bower for front-end packages and npm for developer tools like Yeoman, Grunt, Gulp, JSHint, CoffeeScript, etc.
Resources
Nested Dependencies - Insight into why node_modules works the way it does
This answer is an addition to the answer of Sindre Sorhus. The major difference between npm and Bower is the way they treat recursive dependencies. Note that they can be used together in a single project.
On the npm FAQ: (archive.org link from 6 Sep 2015)
It is much harder to avoid dependency conflicts without nesting dependencies. This is fundamental to the way that npm works, and has proven to be an extremely successful approach.
On Bower homepage:
Bower is optimized for the front-end. Bower uses a flat dependency tree, requiring only one version for each package, reducing page load to a minimum.
In short, npm aims for stability. Bower aims for minimal resource load. If you draw out the dependency structure, you will see this:
npm:
project root
[node_modules] // default directory for dependencies
  -> dependency A
  -> dependency B
     [node_modules]
       -> dependency A
  -> dependency C
     [node_modules]
       -> dependency B
          [node_modules]
            -> dependency A
       -> dependency D
As you can see it installs some dependencies recursively. Dependency A has three installed instances!
Bower:
project root
[bower_components] // default directory for dependencies
  -> dependency A
  -> dependency B // needs A
  -> dependency C // needs B and D
  -> dependency D
Here you see that all unique dependencies are on the same level.
So, why bother using npm?
Maybe dependency B requires a different version of dependency A than dependency C. npm installs both versions of this dependency so it will work anyway, but Bower will give you a conflict because it does not like duplication (because loading the same resource on a webpage is very inefficient and costly, also it can give some serious errors). You will have to manually pick which version you want to install. This can have the effect that one of the dependencies will break, but that is something that you will need to fix anyway.
So, the common usage is Bower for the packages that you want to publish on your webpages (e.g. runtime, where you avoid duplication), and use npm for other stuff, like testing, building, optimizing, checking, etc. (e.g. development time, where duplication is of less concern).
Update for npm 3:
npm 3 still does things differently compared to Bower. It installs dependencies flat at the top level of node_modules, but only the first version of each package it encounters; other versions are installed nested in the tree (under the parent module's node_modules).
[node_modules]
  dep A v1.0
  dep B v1.0
    dep A v1.0 (uses root version)
  dep C v1.0
    dep A v2.0 (this version is different from the root version, so it will be a nested installation)
For more information, I suggest reading the docs of npm 3
TL;DR: The biggest difference in everyday use isn't nested dependencies... it's the difference between modules and globals.
I think the previous posters have covered well some of the basic distinctions. (npm's use of nested dependencies is indeed very helpful in managing large, complex applications, though I don't think it's the most important distinction.)
I'm surprised, however, that nobody has explicitly explained one of the most fundamental distinctions between Bower and npm. If you read the answers above, you'll see the word 'modules' used often in the context of npm. But it's mentioned casually, as if it might even just be a syntax difference.
But this distinction of modules vs. globals (or modules vs. 'scripts') is possibly the most important difference between Bower and npm. The npm approach of putting everything in modules requires you to change the way you write Javascript for the browser, almost certainly for the better.
The Bower Approach: Global Resources, Like <script> Tags
At root, Bower is about loading plain-old script files. Whatever those script files contain, Bower will load them. Which basically means that Bower is just like including all your scripts in plain-old <script>'s in the <head> of your HTML.
So, same basic approach you're used to, but you get some nice automation conveniences:
You used to need to include JS dependencies in your project repo (while developing), or get them via CDN. Now, you can skip that extra download weight in the repo, and somebody can do a quick bower install and instantly have what they need, locally.
If a Bower dependency then specifies its own dependencies in its bower.json, those'll be downloaded for you as well.
But beyond that, Bower doesn't change how we write javascript. Nothing about what goes inside the files loaded by Bower needs to change at all. In particular, this means that the resources provided in scripts loaded by Bower will (usually, but not always) still be defined as global variables, available from anywhere in the browser execution context.
The npm Approach: CommonJS Modules, Explicit Dependency Injection
All code in Node land (and thus all code loaded via npm) is structured as modules (specifically, as an implementation of the CommonJS module format, or now, as an ES6 module). So, if you use NPM to handle browser-side dependencies (via Browserify or something else that does the same job), you'll structure your code the same way Node does.
Smarter people than I have tackled the question of 'Why modules?', but here's a capsule summary:
Anything inside a module is effectively namespaced, meaning it's not a global variable any more, and you can't accidentally reference it without intending to.
Anything inside a module must be intentionally injected into a particular context (usually another module) in order to make use of it
This means you can have multiple versions of the same external dependency (lodash, let's say) in various parts of your application, and they won't collide/conflict. (This happens surprisingly often, because your own code wants to use one version of a dependency, but one of your external dependencies specifies another that conflicts. Or you've got two external dependencies that each want a different version.)
Because all dependencies are manually injected into a particular module, it's very easy to reason about them. You know for a fact: "The only code I need to consider when working on this is what I have intentionally chosen to inject here".
Because even the content of injected modules is encapsulated behind the variable you assign it to, and all code executes inside a limited scope, surprises and collisions become very improbable. It's much, much less likely that something from one of your dependencies will accidentally redefine a global variable without you realizing it, or that you will do so. (It can happen, but you usually have to go out of your way to do it, with something like window.variable. The one accident that still tends to occur is assigning this.variable, not realizing that this is actually window in the current context.)
When you want to test an individual module, you're able to very easily know: exactly what else (dependencies) is affecting the code that runs inside the module? And, because you're explicitly injecting everything, you can easily mock those dependencies.
To me, the use of modules for front-end code boils down to: working in a much narrower context that's easier to reason about and test, and having greater certainty about what's going on.
It only takes about 30 seconds to learn how to use the CommonJS/Node module syntax. Inside a given JS file, which is going to be a module, you first declare any outside dependencies you want to use, like this:
var React = require('react');
Inside the file/module, you do whatever you normally would, and create some object or function that you'll want to expose to outside users, calling it perhaps myModule.
At the end of a file, you export whatever you want to share with the world, like this:
module.exports = myModule;
Then, to use a CommonJS-based workflow in the browser, you'll use tools like Browserify to grab all those individual module files, encapsulate their contents at runtime, and inject them into each other as needed.
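Putting those pieces together, here is a minimal sketch of one module consuming another (file names are illustrative):
// greeting.js - a module with one outside dependency
var React = require('react');

function greeting(name) {
  return React.createElement('div', null, 'Hello, ' + name);
}

module.exports = greeting; // expose this function to other modules

// app.js - a consumer of that module
var greeting = require('./greeting');
var element = greeting('world'); // a React element, ready to hand to react-dom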
AND, since ES6 modules (which you'll likely transpile to ES5 with Babel or similar) are gaining wide acceptance, and work both in the browser and in Node 4.0, we should mention a good overview of those as well.
More about patterns for working with modules in this deck.
EDIT (Feb 2017): Facebook's Yarn is a very important potential replacement/supplement for npm these days: fast, deterministic, offline package-management that builds on what npm gives you. It's worth a look for any JS project, particularly since it's so easy to swap it in/out.
EDIT (May 2019)
"Bower has finally been deprecated. End of story." (h/t: #DanDascalescu, below, for pithy summary.)
And, while Yarn is still active, a lot of the momentum for it shifted back to npm once it adopted some of Yarn's key features.
2017-Oct update
Bower has finally been deprecated. End of story.
Older answer
From Mattias Petter Johansson, JavaScript developer at Spotify:
In almost all cases, it's more appropriate to use Browserify and npm over Bower. It is simply a better packaging solution for front-end apps than Bower is. At Spotify, we use npm to package entire web modules (html, css, js) and it works very well.
Bower brands itself as the package manager for the web. It would be awesome if this was true - a package manager that made my life better as a front-end developer would be awesome. The problem is that Bower offers no specialized tooling for the purpose. It offers NO tooling that I know of that npm doesn't, and especially none that is specifically useful for front-end developers. There is simply no benefit for a front-end developer to use Bower over npm.
We should stop using bower and consolidate around npm. Thankfully, that is what is happening:
With browserify or webpack, it becomes super-easy to concatenate all your modules into big minified files, which is awesome for performance, especially for mobile devices. Not so with Bower, which will require significantly more labor to get the same effect.
npm also offers you the ability to use multiple versions of modules simultaneously. If you have not done much application development, this might initially strike you as a bad thing, but once you've gone through a few bouts of Dependency hell you will realize that having the ability to have multiple versions of one module is a pretty darn great feature. Note that npm includes a very handy dedupe tool that automatically makes sure that you only use two versions of a module if you actually have to - if two modules both can use the same version of one module, they will. But if they can't, you have a very handy out.
(Note that Webpack and rollup are widely regarded to be better than Browserify as of Aug 2016.)
Bower maintains a single version of each module; it only tries to help you select the correct/best one for you.
Javascript dependency management: npm vs bower vs volo?
NPM is better for node modules because there is a module system and you're working locally.
Bower is good for the browser because currently there is only the global scope, and you want to be very selective about the version you work with.
My team moved away from Bower and migrated to npm because:
Programmatic usage was painful
Bower's interface kept changing
Some features, like the url shorthand, are entirely broken
Using both Bower and npm in the same project is painful
Keeping bower.json version field in sync with git tags is painful
Source control != package management
CommonJS support is not straightforward
For more details, see "Why my team uses npm instead of bower".
Found this useful explanation from http://ng-learn.org/2013/11/Bower-vs-npm/
On one hand, npm was created to install modules used in a node.js environment, or development tools built using node.js, such as Karma, lint, minifiers and so on. npm can install modules locally in a project (by default in node_modules) or globally, to be used by multiple projects. In large projects, the way to specify dependencies is by creating a file called package.json which contains a list of dependencies. That list is recognized by npm when you run npm install, which then downloads and installs them for you.
On the other hand bower was created to manage your frontend dependencies. Libraries like jQuery, AngularJS, underscore, etc. Similar to npm it has a file in which you can specify a list of dependencies called bower.json. In this case your frontend dependencies are installed by running bower install which by default installs them in a folder called bower_components.
As you can see, although they perform a similar task they are targeted to a very different set of libraries.
For many people working with node.js, a major benefit of bower is for managing dependencies that are not javascript at all. If they are working with languages that compile to javascript, npm can be used to manage some of their dependencies. however, not all their dependencies are going to be node.js modules. Some of those that compile to javascript may have weird source language specific mangling that makes passing them around compiled to javascript an inelegant option when users are expecting source code.
Not everything in an npm package needs to be user-facing javascript, but for npm library packages, at least some of it should be.
