I'm working on a Node + Express + Mongo project, and I'm wondering what the project architecture should look like when working with versioning. Should I put all the folders under /v1 and then duplicate the same thing for /v2, or is it just the routes? I'm looking for the best practice for this case.
It depends on what versioning tool you use. With git I would at least create different branches for major versions and add the version number to the config in each of them, so that the routing and URI creation know which version we are talking about. I would not add the version number to the directory structure.
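For illustration, a minimal sketch of how this could look in Express, with the version read from config and the directories left unversioned (the route, port and env variable name are illustrative):

// Minimal sketch: the version lives in config (here an environment variable),
// not in the directory structure.
import express from "express";

const API_VERSION = process.env.API_VERSION || "v1";

const app = express();
const router = express.Router();

router.get("/users", (req, res) => {
  res.json({ version: API_VERSION, users: [] });
});

// The same route code is mounted under whatever version this branch declares.
app.use(`/${API_VERSION}`, router);

app.listen(3000);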
I am not sure about other versioning systems; maybe in some of them you would need separate directories for each major version, but if so, code reuse becomes an issue and you would have to extract the common parts into a common directory or a separate project.
I have been looking for a way to share some code between projects, specifically for SPFX and Fluent UI. We found three main ways to do that.
1.
Creating a component library is the way that seemed least complicated because it uses the same infrastructure and does all the building without the need to configure it.
But this adds some issues: we need to build the solution and manually link it locally to make it work. This also works if we put it in a repo, so that part is mitigated.
The second issue is that this implicitly requires Fluent UI and React as well, plus having to place the code inside an SPFX component library project.
2.
I saw some promise in using paths in TypeScript, and this works fine when using the TS compiler: it will go to the folder your project refers to and build it at call time, which is great. But it did not work in SPFX.
3.
Another way was to have a postinstall script to sync the folders, which seems easy enough, but I wonder how practical this is and whether people are actually doing it, and if so, how (a rough sketch of such a script follows this list).
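A rough sketch of what such a postinstall sync script could look like (the paths, folder names and Node version requirement are assumptions, purely for illustration):

// sync-shared.ts: copy a shared source folder into this project's src tree.
// Assumes Node 16.7+ for fs.cpSync; all paths are illustrative.
import * as fs from "fs";
import * as path from "path";

const source = path.resolve(__dirname, "../shared-lib/src");
const target = path.resolve(__dirname, "src/shared");

fs.rmSync(target, { recursive: true, force: true }); // clear any stale copy
fs.cpSync(source, target, { recursive: true });      // pull in the shared code
console.log(`Synced ${source} -> ${target}`);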
All I want to figure out now is a way to take my component code and share it as if it were in a folder of my src, or a simple extension of the code. No extra dependencies or build steps, just code that can be used as a ts/tsx file. For example:
shared lib:
// assuming I have React and Fluent UI already installed for the project
import * as React from 'react';
import { DefaultButton } from '@fluentui/react'; // or 'office-ui-fabric-react' in older SPFX versions

export const fancyCustomButton = (props) => {
  return (<DefaultButton text="Standard" />);
};
src project folder:
import { fancyCustomButton } from 'shared-lib';
It is fine if the files need to be built before we can use them, but can we do that at build time or when the package is installed? Also, wouldn't it increase my bundle size by making both modules depend on things that are already available (React, Fluent UI)?
Given the way Microsoft have architected the loading of bundles in SharePoint and Teams - I believe an SPFX component library is the best way to share code between different solutions, particularly if you are looking to minimise bundle size...
Imagine you have a library for something re-usable: a form, a set of standard branded components, something of that nature. You could put your shared code in repos and add references to it, either by publishing your own repo publicly or using the npm install git+https://yourUrl syntax; but what effectively happens there is that your code is pulled down into node_modules for each project, and any referenced module code is included in your bundles. If you have two, three, four or more webparts on the same page using those same libraries, you're multiplying how many times that code is included on the page.
If you follow Microsoft's guide on setting up a component library project however, your npm link commands allow your types to be recognised in consuming projects without needing to actually include the bundled distribution code. You can deploy your library code once to the App Catalog, and if it's referenced in other solutions -- it's loaded on pages as needed: once.
I have found the development experience to be quite flaky at times, but it does work. When I run gulp clean on my library code, or come back to it after some time, I sometimes find that I need to run npm link and npm link my-project-name again as per the instructions in the above tutorial. Whenever you run gulp build on your library, you should also rebuild the project that consumes the library, either by using gulp build / bundle or by saving a file (if you're running gulp serve). This process works well for developing, and once it comes time to deploy, all you need to do is add a named reference to your library inside package.json and then deploy both .sppkg files to your App Catalog.
In terms of your last question re: bundle size - react is not actually included in the dependencies for an SPFX library project, but you will find it's available to use. When you build your library, if you take a look in the generated javascript in your dist folder, you will see it's listed as one of the dependencies for the webpacked content along with react-dom and ControlStrings. It's on line 1.
office-ui-fabric-react is included as a devDependency thanks to the @microsoft/sp-webpart-workbench package that gets scaffolded with all SPFX projects, and if you check your library's dist javascript, you will see that included components are being webpacked and included in your bundle. I'm not entirely sure whether, when you pull this code into your consuming project, webpack then tree-shakes to de-duplicate and ensure only the necessary code is included. Someone else may be able to illuminate us there or provide a more accurate explanation of what's going on; I will update this response if anyone comments to let me know.
And finally, this is more of a personal way of working, but it may be worth consideration:
When developing a library, I sometimes reference it in other projects via a local npm install ../filepath command. This ensures that when I install the library as described, the consuming project installs any necessary dependencies. I'm able to tweak both projects if I need to. When it comes time to deploy, I commit my changes to both projects, deploy my library code to the App Catalog, and then npm uninstall the library from the consuming project and add a reference as described in the above tutorial. When I deploy projects that use my library, they just work.
I recently developed a library that uses PnPjs, in particular the @pnp/sp library that is used to talk to SharePoint. If you look at the Getting Started guide for that library, they expect you to pass a reference to your Application Customizer or Web Part context during setup, or explicitly set things up using a base URL and so forth; and of course, a library doesn't really have a page context of any sort, since the whole point of this code is that it's reusable. So that poses a challenge. My solution was to do the setup in the consuming web part(s) and ensure that they pass a reference to the sp object (which is of type SPRest) to any code or components that exist in my library. My library has peerDependencies on those pnp libraries so that the code isn't duplicated in consuming projects. Sometimes you have to think about what your library needs to include in its bundle and what you expect consuming solutions to already have, and maybe find ways to ensure things aren't included that aren't needed.
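A rough sketch of that pattern, assuming PnPjs v2-style APIs (the function name and the list query are made up for illustration): the web part does the setup, and the library only ever receives the already-configured object.

// In the consuming web part's onInit() (it has the page context):
//   import { sp } from "@pnp/sp";
//   sp.setup({ spfxContext: this.context });

// In the shared library: accept the configured SPRest instance as a
// parameter instead of importing and configuring @pnp/sp here.
import { SPRest } from "@pnp/sp";
import "@pnp/sp/webs";
import "@pnp/sp/lists";

export async function getListTitles(spInstance: SPRest): Promise<string[]> {
  const lists = await spInstance.web.lists();
  return lists.map((l: any) => l.Title as string);
}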
For example, in the scenario you talk about, you may want to ensure fluentui or office-ui-fabric-react are only devDependencies or peerDependencies for your library. As long as your library and the project(s) consuming your library both use the right version(s) you shouldn't have any trouble, and you can document any prerequisites in your library documentation. You can check which versions of these libraries are installed for the SPFX version you are currently using, i.e. SPFX v1.11 or v1.12, etc. Just run npm ls <packagename> to get a breakdown, or check your package.json file.
The express-generator tool creates a file called bin/www and uses it as the application's main entry-point. I believe I've seen a couple of other modules do this as well, but the vast majority simply use index.js.
What is the rationale behind this? Of course I understand why you would split the server and the code for setting up the program into separate modules, but why bin/www and not index.js? Why nest the main entry point of a program two levels deeper than the stuff it calls? And remove the file extension, making it even less descriptive?
Is there a clever, non-obvious reason behind this? Should I use this for my node-modules as well?
Thank you!
[edit]:
All good answers, thank you folks! I've accepted the one pointing out that this is standard behaviour for packages that include executables. Here's some more reading I've come across on this:
https://docs.npmjs.com/files/package.json#bin
https://blog.npmjs.org/post/118810260230/building-a-simple-command-line-tool-with-npm
You're used to running npm run, but a sysadmin isn't. They will look for executables (files with the executable x attribute) in the bin directory.
The entry point index.js is for the node module itself. Packages that provide commands to run on the console contain a bin directory.
The extension is removed because the file is meant to be run as a program, not as a script, and programs traditionally do not have extensions.
express-generator creates a basic structure for an Express application. By convention, the entry point of the app is index.js or app.js. In fact, express-generator creates an app.js at the root of the application with the initial setup of Express.
Also by convention, the bin/ directory is used for binary files, and by extension for the scripts you can directly launch (note the shebang on the first line of the www file). On Linux it is common for binary files to have no extension, which could explain the choice to keep that habit for this file.
www, by convention again, is used for naming web applications (like /var/www/html on an Apache server).
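To make the convention concrete, the generated bin/www is essentially a small launcher along these lines (a simplified sketch; the real generated file, plain CommonJS JavaScript, has more elaborate port and error handling):

#!/usr/bin/env node

// Load the Express app exported by app.js and start an HTTP server for it.
const http = require('http');
const app = require('../app');

const port = process.env.PORT || '3000';
http.createServer(app).listen(port, () => {
  console.log('Listening on port ' + port);
});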
Anyway, as the documentation says,
The app structure created by the generator is just one of many ways to structure Express apps. Feel free to use this structure or modify it to best suit your needs.
See also this answer, which talks about the core structure of Express between versions 3 and 4, with the removal of external modules.
I recently started creating a VSTS Extension. I used bower to automatically pull in the latest vss-web-extension-sdk component, and everything works fine so far. One thing that bothers me is that I had to reference the script in the following way:
<script src="bower_components/vss-web-extension-sdk/lib/VSS.SDK.min.js"></script>
I understand that I can change the bower components directory, but should it be a unique name or can it be scripts, which is the folder where my own scripts reside?
Additionally:
Should I (and how do I) concatenate components together when it comes to production?
Should I minify all of my components (Javascript) together, even though I'm already provided *.min.js files?
I've come across other similar questions relating to this, but I'm not using Grunt or Gulp, just npm and bower. I ask because I want to ensure that my production code does not reveal which tooling I used (npm, bower), and I want the production code to be as clean as possible, which may require me to minify all of the JS together (I'm not sure about this one).
There seems to be a silent agreement on the directory structure of Node.js projects. At least I couldn't find any information on the official site.
From what I've understood browsing open-source projects:
Usually a project has /bin and /lib directories.
/bin contains the module's entry point(s).
/lib contains 'helper' code.
I guess this is based on Unix directory structure.
And it makes sense for good-old compiled programs with executables and dlls.
AFAIK /lib was used to share libraries between programs.
In node, however, true dependencies are in node_modules.
/lib is used for application code.
Why not use /src?
This model is designed from an execution point of view, but all the projects I've seen use it to structure the code under development too. Also, I've seen GitHub projects where the whole module is just one file in the bin directory.
I'm new to node.js and would like to know how people in the community came to these decisions.
I guess what you mention about lib and src happened because of the Node philosophy of designing small modules, not only in terms of code size but, most importantly, in terms of scope. This principle has its roots in the Unix philosophy, particularly:
Small is beautiful.
Make each program do one thing well.
The architecture that I follow for developing web applications is broken into models, services, routes and controllers. I use lib for utilities and helper functions. Also, a lot of Node applications are built using Express, which itself has a minimal API. In short, it is basically a matter of choice between convention and configuration.
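A small, purely illustrative sketch of that layering, collapsed into one file for brevity (the folder names in the comments are just the usual suggestions, not a fixed rule):

import express, { Request, Response, Router } from "express";

// services/userService.ts: business logic, no HTTP details
async function listUsers(): Promise<Array<{ id: number; name: string }>> {
  return [{ id: 1, name: "Ada" }];
}

// controllers/userController.ts: translate HTTP requests into service calls
async function getUsers(req: Request, res: Response): Promise<void> {
  res.json(await listUsers());
}

// routes/users.ts: wire URLs to controllers
const usersRouter = Router().get("/", getUsers);

// app entry point: mount the routes
const app = express();
app.use("/users", usersRouter);
app.listen(3000);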
I am in the process of converting an existing Rails 3.1 app I made for a client into a Backbone.js app with the Rails app only as a backend server extension. This is only a personal project of mine, to learn more about Backbone.js.
While setting up Backbone.js (using Backbone-on-Rails), I noticed I have some dependencies (like backbone-forms) that come from external sources and are frequently updated.
I've grown accustomed to using Bundler to manage my Ruby gems, but I haven't found anything similar for JavaScript files. I'm wondering if there is any way to do the same for Javascript (and possibly css) files.
Basically I can see three possibilities to solve this issue:
Simply write down all the sources for each JS file and check these sources from time to time to see what has changed.
Use some kind of existing "Bundler for Javascript" type of tool. I've been looking for something like this but have yet to find anything (good).
Since most of these JS files will be coming from Git anyway, use Git to get the files directly and use checkout to get the latest version from time to time.
I prefer the last option, but was hoping for some more input from other people who have gone this route or preferred some other way to tackle this issue (or is this even an issue?).
I figure the Git way seems easy, but I am not quite sure yet how I could make this work nicely with Rails 3.1 and Sprockets. I guess I'd try to check out a single file using Git and have it cloned into a directory that is accessible to Sprockets, but I haven't tried this yet.
Any thoughts?
You don't mention it in your alternatives, but ideally you should use something like Maven to manage your dependencies. Unfortunately, there are no public repositories for javascript files. This discussion lists some other options which might be of help to you: JQuery Availability on Maven Repositories
For now I've settled on using the Git solution combined with some guard-shell magic.
The steps I follow:
Create a dependencies directory somewhere on your local drive
Clone the repositories with javascript (or css) files you want to use in the app
Set up a custom guard-shell command to do the following:
group 'dependencies' do
  guard 'shell' do
    dependencies = '~/path/to/dependencies/'
    watch(%r{backbone-forms/src/(backbone\-forms\.js)}) { |m| `cp #{dependencies + m[0]} vendor/assets/javascripts/#{m[1]}` }
  end
end
Place the Guardfile at the root of the app directory
It takes some time to set things up, but after that, when you have Guard running and you pull changes into your dependencies, the required files are automatically copied to your application directory, where they become part of your repository.
It seems to work great. You need to do some work for each new file you want to include in the asset pipeline, but all that is required is cloning the repository into your dependencies directory and adding a single line to your Guardfile; for example, for the backbone-forms css:
watch(%r{backbone-forms/src/(backbone\-forms\.css)}) {|m| `cp #{dependencies + m[0]} vendor/assets/stylesheets/#{m[1]}` }
Also, the reason I added this Guard to a group is because I keep my dependencies outside the main application directory, which means guard normally doesn't check my dependencies directory. To make this work, I start up my main Guard processes using bundle exec guard -g main and use bundle exec guard -w ~/path/to/dependencies -g dependencies in a new terminal window/tab to specify the -w(atchdir).