The express-generator tool creates a file called bin/www and uses it as the application's main entry point. I believe I've seen a couple of other modules do this as well, but the vast majority simply use index.js.
What is the rationale behind this? Of course I understand why you would split the server and the program-setup code into separate modules, but why bin/www and not index.js? Why nest the main entry point of a program two levels deeper than the stuff it calls? And remove the file extension, making it even less descriptive?
Is there a clever, non-obvious reason behind this? Should I use this for my own Node modules as well?
Thank you!
[edit]:
All good answers, thank you folks! I've accepted the one pointing out that this is standard behaviour for packages that include executables. Here's some more reading I've come across on this:
https://docs.npmjs.com/files/package.json#bin
https://blog.npmjs.org/post/118810260230/building-a-simple-command-line-tool-with-npm
You're used to running things with npm run, but a sysadmin isn't. They will look for executables (files with the x attribute set) in the bin directory.
The index.js entry point is for the Node module itself. Packages that provide commands to run on the console all contain a bin directory.
The extension is dropped because the file is meant to be a program rather than a script, and programs traditionally have no extension.
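As a minimal sketch of that convention (the package and file names here are placeholders, not taken from express-generator): the package points npm at its executable through the "bin" field of package.json, and the file itself starts with a shebang line instead of relying on a file extension.

In package.json:

    "bin": { "my-app": "./bin/www" }

And bin/www itself:

    #!/usr/bin/env node
    // thin launcher: load the app module and start the HTTP server
    var app = require('../app');
    app.listen(process.env.PORT || 3000);

On a global install, npm symlinks the declared executable onto the PATH and marks it executable, which is why the file lives in bin/ rather than being the module's index.js.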
express-generator creates a basic structure for an Express application. By convention, the entry point of an app is index.js or app.js. In fact, express-generator creates an app.js at the root of the application with the initial Express setup.
Also by convention, the bin/ directory is used for binary files, and by extension for scripts you can launch directly (note the shebang on the first line of the www file). It is common on Linux for binaries to have no extension, which could explain the choice to keep that habit for this file.
www, again by convention, is used for naming web applications (like /var/www/html on an Apache server).
Anyway, as the documentation says,
The app structure created by the generator is just one of many ways to structure Express apps. Feel free to use this structure or modify it to best suit your needs.
See also this answer, which talks about the core structure of Express between versions 3 and 4 and the removal of external modules.
I was wondering if anyone here had some ideas or experience getting rid of long, hard-to-maintain relative imports inside of large node.js cloud functions projects. We’ve found that the approach which uses local NPM packages is very sub-optimal because of how quickly we tend to roll out and test new packages and functionality, and refactoring from JS to TS is impossible for us at the moment. We'd love to do it in the near future but are so slammed as it is currently :(
Basically what I'm trying to do in cloud functions is go from
const {helperFunction} = require('../../../../../../helpers')
to
const {helperFunction} = require('helpers')
I have been unable to get Babel or anything similar working in cloud functions. Intuitively I feel like there is an obvious solution to this beyond something like Artifact Registry or local NPM packages, but I'm not seeing it! Any help would be greatly, greatly appreciated :)
For CommonJS modules in general, you have several options. I don't know the Google Cloud environment so you will have to decide which options seem appropriate to it.
The most attractive option to me is #5 as it's just a built-in search up the directory tree, designed specifically to look in the node_modules sub-directories of your parent directories. If you can install the shared modules in that way, then nodejs should be able to find them as long as your directory hierarchy is retained by Google Cloud.
Options #3 and #4 are hacking on the loader, which has its own risks, but does give you implementation flexibility since you could implement your own prefix that looks in a particular spot. But it's hacking and may or may not work in the Google Cloud environment.
Options #1 and #2 rely on environment variables and shared directories, which may or may not be relevant in the Google Cloud environment.
1. You can specify the environment variable NODE_PATH as a colon-delimited (or semicolon on Windows) list of paths to search for modules. Doc here.
2. In addition, nodejs will search $HOME/.node_modules and $HOME/.node_libraries, where $HOME is the user's home directory.
3. There is a package called module-alias here that is designed specifically to help you solve this. You define module aliases in your package.json, import this one module, and then you can use the directory aliases in your require() statements.
4. You can make your own pre-processor for resolving a module filename that is being loaded by require. You can do this by monkey-patching Module._resolveFilename to either modify the filename passed to it or to add additional search paths to the options argument. This is the general concept that the module-alias package (mentioned in point #3 above) uses; a minimal sketch appears after this list.
5. If the actual location of the helper module you want to load is in a node_modules directory somewhere above your current module directory on this volume, it can be found automatically as long as the require is just a filename, as in require("helpers"). An example in the doc here describes this:
For example, if the file at /home/ry/projects/foo.js called require('bar.js'), then Node.js would look in the following locations, in this order:
/home/ry/projects/node_modules/bar.js
/home/ry/node_modules/bar.js
/home/node_modules/bar.js
/node_modules/bar.js
You can see how it automatically searches up the directory tree, looking in each parent node_modules sub-directory all the way up to the root. If you put these common, shared modules in your main project file's node_modules or in any directory above, then they will be found automatically. This might be the simplest way to do things, as it's just a matter of structuring directories and removing all the ../../ stuff from your paths. Clearly these common, shared modules are already located somewhere common - you just need to make sure they're in this search hierarchy so they can be found automatically.
Note this info is for CommonJS modules and may be different for ESM modules.
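To make option #4 concrete, here is a minimal sketch of patching Module._resolveFilename. The '#/' prefix is an arbitrary name of my own choosing, and _resolveFilename is an internal (underscore-prefixed) API, so treat this as fragile rather than as a blessed mechanism:

    // register-aliases.js -- require() this once, at the very top of your entry point
    const Module = require('module');
    const path = require('path');

    const originalResolve = Module._resolveFilename;
    Module._resolveFilename = function (request, ...rest) {
      // rewrite e.g. require('#/helpers') to an absolute path under this directory
      if (request.startsWith('#/')) {
        request = path.join(__dirname, request.slice(2));
      }
      return originalResolve.call(this, request, ...rest);
    };

After require('./register-aliases'), any module in the process can write require('#/helpers') instead of counting ../ segments. This is essentially what module-alias does for you, driven by a _moduleAliases section in package.json.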
This seems like a situation that must be incredibly common, but I've yet to find a good solution to it. I'm a little new to modern Javascript, so please forgive me (or better yet, correct me) if I'm using the wrong terminology here and there.
I'm developing a web application. It's got a server, running as JavaScript (ES6, I believe - using import/export) in node.js, using express.js for a router (and built by express-cli). And it's got a client side, also JavaScript, mostly Vue modules (and built by vue-cli). I admit I don't really understand a lot of the boilerplate build code express-cli and vue-cli emitted, but it does work. I am not using any of the long list of frameworks that are assumed in the many pages Google found for me when I tried to find an answer to this question.
Obviously, the client and server will be sending data structures (instances of various classes) back and forth, which I know how to do. And these ought to be the same class definitions.
I made an attempt to make webpack build both server and client, and that failed, so now I've split the application into two projects, each with its own folder tree, its own package.json, its own node_modules, and I'm using just webpack-dev-server for the client and nodemon/babel for the server. A third folder contains the shared code, which is imported by the client and the server. I got this to work well enough for proof of concept, but getting both sides (and Visual Studio Code) to recognize that the shared code is part of them is turning out to be a challenge, and I'm pretty sure I'm just going down the wrong path.
So, my question is, what is currently considered the best practice (or at least, a good practice) way to structure and build a client/server application of this type? An ideal answer to this question would include both folder structure and enough of the major configuration files to help me figure out how to write mine. A reference to an up-to-date and reliable source of information on this topic would satisfy me nicely.
I suspect that the right answer includes merging everything back into a single project and doing something clever with the webpack config files and maybe project.json... but what exactly that clever thing might be has so far eluded me.
Thanks!
In my opinion, you're right to use separate folders for your Node.js (server) and Vue.js (client) apps, as they're effectively two separate projects.
Apart from sharing some configs, utilities and validation, the app and server typically have little overlap: they're doing two very different things, require different build tools and different runtime environments, and have different security concerns; and if you consider the future possibility of leveraging your client codebase to create a native iOS/Android app, the difference between the two codebases will only increase.
For my current project, I have a folder structure whereby my server resides in the project root and my client is in a subfolder /app. Here's a simplified outline of how I've structured my node.js/vue.js project:
constants
    config.js (server environment, database connection, api keys etc)
    settings.js (business logic)
    pricing.js
drivers
    auth.js
    db-connect.js
    email.js
    sms.js
    socket-io-server.js
models (mongoose database models)
node_modules
routes (express routes)
    api.js
    auth.js
schema (json schema for automated validation)
    login-validation.js
    register-validation.js
services
    billing.js
    validation.js
app (this is my client sub-project sharing some server modules)
    public (the output of the webpack client bundle including index.html)
    src (ES6 source code, images, SASS/Stylus, fonts etc)
        css
        html (handlebars templates)
            index.hbs
            home.hbs
            account.hbs
            pricing.hbs
        img
        js
            api
            client-services
                socket-io-client.js
            store.js
            router.js
            vuex-utils.js
            dom-utils.js
            components (Vue components)
                Profile.vue
                Payment.vue
            index.js (root and entry point for webpack)
    webpack.config.js
    .env.development.js
    .env.production.js
package.json
server.js (node.js server root entry point)
This is by no means prescriptive. Notice that I've organized the project files largely by function. Here's a link to an article proposing structuring around features.
A shared (client/server) module folder has pros and cons. The main downside I can see is that which modules are shared changes during development. For that reason my server codebase and modules reside in the project root folder, and some are also imported by the client app. The project root folder then naturally hosts a shared package.json and node_modules folder.
As your app develops, and you gain insight, it's fine to re-structure as the need arises (some IDEs make refactoring easier than others). If the Visual Studio Code IDE or webpack are not working well with your folder structure, it may be a configuration issue or the problem may stem from incorrect require/import paths. IDE inspections may help you find those errors or your IDE may struggle to be useful because of those errors.
My IDE (WebStorm) and webpack v4 have no problems with the above folder structure or the location of modules in general; I have restructured my app in many different ways and have become more adept at optimally configuring my IDE and webpack along the way.
Webpack is very specific about the location of its configuration file and input/output paths, and about whether it is executed from the project root or the /app folder, and it can take a lot of time to get it working properly. Nevertheless, it will locate modules referenced by correct require/import paths. Here's the part of my webpack.config.js that sets up file entry/output.
const sourcePath = './src';

module.exports = {
  entry: [
    sourcePath + '/js/index.js'
  ],
  output: {
    path: __dirname + '/public',               //location of webpack output files
    publicPath: '/',                           //address that browser will request the webpack files
    filename: 'js/index.[contenthash].js',     //output filename
    chunkFilename: 'js/chunk.[contenthash].js' //chunk filename
  },
  ...
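If the client bundle needs to import the shared modules that live above /app (the constants or services folders in the project root, for example), webpack's resolve.alias can map a short name onto those directories. Here is a minimal sketch; the '@constants' and '@services' alias names are my own invention, not part of any convention:

    // webpack.config.js (inside /app)
    const path = require('path');

    module.exports = {
      // ...entry/output as above...
      resolve: {
        alias: {
          // point the aliases at shared folders one level above /app
          '@constants': path.resolve(__dirname, '../constants'),
          '@services': path.resolve(__dirname, '../services')
        }
      }
    };

With that in place, client code can write import settings from '@constants/settings' instead of a relative path that climbs out of /app.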
I've located my webpack.config.js and execute webpack inside the /app subfolder. My package.json scripts section to start the node.js server and have webpack build & watch my app files is:
"scripts": {
"start": "if [$NODE_ENV == 'production']; then node server.js; else nodemon server.js; fi",
"build": "cd ./app; webpack;"
},
Which allows me to start the node server with:
> npm start
and have webpack watch/re-build my bundle with:
> npm run build
Visual Studio Code has a rich IntelliSense experience and a feature called Automatic Type Acquisition which should allow it to support modules you import regardless of their location. This article provides more information on configuring VS Code for node.js.
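If VS Code itself is struggling to resolve the shared imports, a jsconfig.json at the project root can teach IntelliSense where to look. A sketch, assuming the webpack aliases from the example above (the alias names are mine):

    {
      "compilerOptions": {
        "baseUrl": ".",
        "paths": {
          "@constants/*": ["constants/*"],
          "@services/*": ["services/*"]
        }
      },
      "exclude": ["node_modules"]
    }

This only affects editor tooling; the runtime and webpack still need their own resolution configured as described above.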
I have one code base for both Web and NodeWebkit (NW) application.
I use the following stack:
- React
- Hapi
- Sequelize
- Windows environment
The web version of the application uses MySQL, while the NW version uses SQLite. It all works fine. I have a config file that compiles the application for what I need (web or NW).
The problem I face now is how to deploy the NW application. The idea is to provide the NW application to a client, who will open it by clicking an icon.
Since I use Node for the NW version, and the application uses many modules stored in node_modules, I face the challenge of how to pack it all up.
My idea is to make a Windows installer. The user will click it and the installer will extract all files to the destination, and also create an icon on the user's desktop to run the app.
The problem is the Windows file-name length limitation. Inside node_modules there are many subdirectories that simply violate the Windows limit. I can't even copy the node_modules folder; I can't even delete it. Well, sure, I can copy it if I zip it... or manually remove the long folders.
I have not yet started working on the installer, but I am thinking I will hit a wall with this approach.
Does anyone have an idea how to make this deployment?
How can I integrate NPM3 in NW?
My plan now is to make a Windows installer. It will install the application files normally. The node_modules will be zipped beforehand and placed inside the installer, which will then simply unzip it to the destination folder.
I will post my progress here.
Some update here.
The main issue here was the depth of node_modules. I have many modules in there, and after some thinking I figured out there is a simple rule: some modules are server-side modules, while others are only used by React.
And since webpack already creates huge bundle files in which all of those modules are included, I simply do not need them at all.
So I removed all the front-end modules (Babel modules, react-*) and left only the server-side ones (Hapi, Sequelize...). A miracle happened: the application ran, and startup was much faster.
I am going to use Inno Setup to make a manifest file, and it should be good to go.
I am still not out of the danger zone, as a developer might need a server-side module with huge depth. But I will think about that if it happens.
More to follow...
Actually, in Node.js you can do the following:
1. Create another folder inside your project folder, for example "server_modules".
2. In that folder, create another package.json file and install any modules needed for the server there.
3. All these modules will be accessible like normal node_modules using require('module_name'), and you can delete the "server_modules" folder when you package your desktop version if you don't need it.
Note: this approach is used by some developers to achieve microservices in Node.js, but it is useful in your case too.
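As a rough sketch of those steps from a shell (the module names are just examples matching the question's stack):

    mkdir server_modules
    cd server_modules
    npm init -y
    npm install hapi sequelize

Everything installed this way lands in server_modules/node_modules, so it can be zipped, shipped, or deleted as a single unit when you package the desktop build.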
There seems to be a silent agreement on the directory structure of node.js projects. At least I couldn't find any information on the official site.
From what I've understood browsing open-source projects:
Usually a project has /bin and /lib directories.
/bin contains the module's entry point(s).
/lib contains 'helper' code.
I guess this is based on Unix directory structure.
And it makes sense for good-old compiled programs with executables and dlls.
AFAIK /lib was used to share libraries between programs.
In node, however, true dependencies are in node_modules.
/lib is used for application code.
Why not use /src?
This model is designed from an execution point of view, but all the projects I've seen use it to structure the code under development too. I've also seen GitHub projects where the whole module is just one file in the bin directory.
I'm new to node.js and would like to know how people in the community came to these decisions.
I guess what you mentioned about lib and src happened because of Node's philosophy of designing small modules, not only in terms of code size but, most importantly, in terms of scope. This principle has its roots in the Unix philosophy, particularly:
Small is beautiful.
Make each program do one thing well.
The architecture that I follow for developing web applications is broken into models, services, routes and controllers. I use lib for utilities and helper functions. Also, a lot of Node applications are built using Express, which itself has a minimal API. In short, it is basically a matter of choice between convention and configuration.
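For what it's worth, a layout following that architecture might look like this (just one possibility, not a standard):

    models/      (data layer, e.g. mongoose schemas)
    services/    (business logic)
    routes/      (express route definitions)
    controllers/ (request handlers wired into the routes)
    lib/         (utilities and helper functions)
    index.js     (entry point)
    package.json

Nothing in Node enforces this; require() only cares about paths, so any structure the team can navigate works.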
If I have a node.js application that is filled with many require statements, how can I compile this into a single .js file? I'd have to manually resolve the require statements and ensure that the classes are loaded in the correct order. Is there some tool that does this?
Let me clarify.
The code that is being run on node.js is not node specific. The only thing I'm doing that doesn't have a direct browser equivalent is using require, which is why I'm asking. It is not using any of the node libraries.
You can use webpack with target: 'node'; it will inline all required modules and emit everything as a single, standalone, one-file Node.js module.
https://webpack.js.org/configuration/target/#root
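A minimal configuration for that approach might look like the following sketch (the entry and output paths are placeholders):

    // webpack.config.js
    const path = require('path');

    module.exports = {
      target: 'node',          // keep Node built-ins like fs/path as runtime require()s
      mode: 'production',
      entry: './src/index.js', // your application's entry point
      output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js'  // the single standalone output file
      }
    };

Running webpack then produces dist/bundle.js, which can be executed directly with node dist/bundle.js.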
2021 edit: There are now other solutions you could investigate, namely:
https://esbuild.github.io
https://github.com/huozhi/bunchee
Try the following:

npm i -g @vercel/ncc
ncc build app.ts -o dist

See details here: https://stackoverflow.com/a/65317389/1979406
If you want to send common code to the browser, I would personally recommend something like brequire or requireJS, which can "compile" your Node.js source into asynchronously loading code whilst maintaining the order.
For an actual compiler into a single file, you might get away with the one for requireJS, but I would not trust it with large projects with high complexity and edge cases.
It shouldn't be too hard to write a file, like the package.json that npm uses, to state in which order the files should occur in your packaging. This way it's your responsibility to make sure everything is compacted in the correct order; you can then write a simplistic Node application that reads your package.json file and uses file IO to create your compiled script.
Automatically generating the order in which files should be packaged requires building up a dependency tree and doing lots of file parsing. It should be possible but it will probably crash on circular dependencies. I don't know of any libraries out there to do this for you.
Do NOT use requireJS if you value your sanity. I've seen it used in a largish project and it was an absolute disaster ... maybe the worst technical choice made at that company. RequireJS is designed to run in-browser and to asynchronously and recursively load JS dependencies. That is a TERRIBLE idea. Browsers suck at loading lots and lots of little files over the network; every single doc on web performance will tell you this. So you'll very very quickly end up needing a solution to smash your JS files together ... at which point, what's the point of having an in-browser dependency resolution mechanism?

And even though your production site will be smashed into a single JS file, with requireJS, your code must constantly assume that any dependency might or might not be loaded yet; in a complex project, this leads to thousands of async load barriers wrapping every interaction point between modules. At my last company, we had some places where the closure stack was 12+ levels deep. All that "if loaded yet" logic makes your code more complex and harder to work with.

It also bloats the code, increasing the number of bytes sent to the client. Plus, the client has to load the requireJS library itself, which burns another 14.4k. The size alone should tell you something about the level of feature creep in the requireJS project. For comparison, the entire underscore.js toolkit is only 4k.
What you want is a compile-time step for smashing JS together, not a heavyweight framework that will run in the browser....
You should check out https://github.com/substack/node-browserify
Browserify does exactly what you are asking for .... combines multiple NPM modules into a single JS file for distribution to the browser. The consolidated code is functionally identical to the original code, and the overhead is low (approx 4k + 140 bytes per additional file, including the "require('file')" line). If you are picky, you can cut out most of that 4k, which provides wrappers to emulate common node.js globals in the browser (eg "process.nextTick()").
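For reference, a typical invocation is simply (file names are placeholders):

    browserify main.js -o bundle.js

which walks the require() graph starting from main.js and writes the combined, browser-ready bundle to bundle.js.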