Are an API and a router two separate things? - javascript

I have a Node.js with Express.js app. I have folders like such:
src
/models
/router
/store
/api/v1
index.js
My router folder has a file called index.router.js which contains my app's routes e.g:
import UserAPI from '../api/v1/user.api.js'
expressRouter.post('/register', async function (req, res) {
  await UserAPI.registerUser({
    EMail: req.body.EMail,
    Password: req.body.Password,
    Name: req.body.Name
  });
});
The above is a route to an API so it went into my index.router.js file. To perform an API action on this endpoint, I created another file for API functionality called user.api.js and it would contain something like:
async function registerUser({EMail, Password, Name}) {
  // Import method from model file and perform DB action
}
As the application has grown, I have come to wonder whether I have made it too complex and created an unnecessary layer with the separate user.api.js file, which could possibly be refactored into index.router.js.
What I do not understand is: what is the standard practice for structuring an API's files for scalability, and should the API endpoints be in a separate file/folder such as api/v1, api/v2, or should they be part of the router?
One thing that could be advantageous about having separate api files is reusability. Because they contain only functions and no routing, the functions can be reused across many different router files.

You have basically re-invented the MVC design pattern.
It's not overly complicated. It is considered good practice and encouraged.
C
Traditionally, what you call Routers is usually called the Controller. The job of the controller is simply to handle routing, handle argument parsing (request body, query parameters, is user logged in etc.) and sometimes handle validation. This is exactly what Express is designed to do. And Express allows controller functionality like authentication and validation to be refactored into middlewares.
Note: Sometimes you will see tutorials on the internet where people separate controllers and routes. My personal recommendation is do not do this. Express routing has been designed to be perfect for writing controllers. About the ONLY reason to separate them is if you have two different URLs that do the exact same thing. In my opinion that is better handled by a redirect.
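The middleware refactoring mentioned above can be sketched without Express at all: middlewares are just (req, res, next) functions called in a chain. The toy runner below stands in for Express, and all the names (runChain, validateBody, registerHandler) are hypothetical:

```javascript
// A minimal runner mimicking how Express chains (req, res, next) middlewares.
function runChain(middlewares, req, res) {
  let i = 0;
  function next(err) {
    if (err) { res.statusCode = 400; res.body = err.message; return; }
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  }
  next();
}

// Validation refactored out of the route handler, as suggested above.
function validateBody(req, res, next) {
  if (!req.body.EMail) return next(new Error('EMail is required'));
  next();
}

// The route handler itself only runs if validation passed.
function registerHandler(req, res) {
  res.statusCode = 201;
  res.body = 'registered ' + req.body.EMail;
}

const res = {};
runChain([validateBody, registerHandler], { body: { EMail: 'a@b.c' } }, res);
console.log(res.statusCode, res.body); // → 201 registered a@b.c
```

In real Express the equivalent is simply expressRouter.post('/register', validateBody, registerHandler).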
M
Traditionally what you call API is called the Model. The model is your traditional collection of objects or data structures that you learned to program with. The model is what performs the application logic. Normally classes or modules that implement models are not labeled with anything. For example a user model would not be called UserAPI or UserModel but simply called User. However, what you name things is just a convention. Stick with what makes sense to you.
V
The final part of MVC is the View. In Express the view is simply res.json() or res.render() with its associated HTML template. This part is 99% already written by Express developers - you just need to tell the view functions what to send to the front-end.
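Framework aside, the three parts can be sketched as plain functions. This is only an illustration with hypothetical names and an in-memory array standing in for the database:

```javascript
// Model: pure application logic, knows nothing about HTTP.
const users = [];
async function registerUser({ EMail, Password, Name }) {
  if (users.some(u => u.EMail === EMail)) throw new Error('user exists');
  users.push({ EMail, Password, Name });
  return { EMail, Name };
}

// Controller: parses the request and delegates to the model.
// In Express, this is the (req, res) handler itself.
async function registerController(req, res) {
  const user = await registerUser({
    EMail: req.body.EMail,
    Password: req.body.Password,
    Name: req.body.Name
  });
  renderJson(res, { registered: user.EMail });
}

// View: serializes the result; in Express, simply res.json(data).
function renderJson(res, data) {
  res.body = JSON.stringify(data);
}
```

A fake req/res pair is enough to exercise all three layers, which is exactly what makes this separation easy to test.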
Your architecture is good
There are very good reasons for separating the model (API) from the controller (router). First, it allows you to solve your problems without polluting your core logic with parameter-parsing logic. Your models should not need to worry about whether the user is logged in or how data is passed to them.
Second, it allows you to use your model (API) logic outside of Express. The most obvious use for this is unit testing. This allows you to unit test your core logic without the web parts of the code. I also usually write utility scripts that I can use to do things like create a new user, dump user data, or generate an authentication token so I can use it with Postman, etc.
For example you can create a script like:
#! /usr/bin/env node
// register-user.js
import UserAPI from '../api/v1/user.api.js'
UserAPI.registerUser({
EMail: process.argv[2],
Password: process.argv[4],
Name: process.argv[3]
})
.then(x => {console.log(x); process.exit()})
.catch(console.error);
Which you can then execute on the command line to create new users without needing to run the server:
$ ./register-user.js myemail@address.com 'My Name' 123456
It looks like your software is already structured according to MVC. Keep it that way. It will make maintaining and modifying the software a little easier.

API and router are two different things and even from different worlds.
Most applications are composed of two basic building blocks: UIs and APIs. UIs are meant for humans and APIs for machines. You can have 0-n UIs and 0-n APIs. There is no rule for that.
As an example of UIs, you might have a webpage for common visitors, an application for paying visitors or those who bought your product, and an application for administrators. Those are three separate UIs. If one is malfunctioning, the others keep working.
APIs are the other example. There can be a separate API for each of these UIs. There can also be APIs for third parties or for other microservices of your own system. And again, if one API doesn't work, it doesn't impact the others.
Router is the design pattern for URL control. Or a specific implementation of the design pattern if you wish.
While both UIs and APIs might require URL control, and the same pattern can be used in both parts of the application system, that doesn't mean it should live in the same folder. Things belonging to different building blocks should be in separate folders. The router, unlike the model, is not a common thing you would share among building blocks.
The folder structure you present here, I'm sure you have seen somewhere on the internet. But I do not consider it mature. I would consider something like this more mature:
├── model
├── website-ui
│ ├── routers
│ ├── templates
│ └── index.js // this is router
├── website-api
│ ├── user
│ │ └── index.js // this might be a router (API dependent)
│ ├── item
│ │ └── index.js // this might be a router
│ └── index.js // this is router
├── index.js
└── main-router.js // this is router
Usually you don't even write a main-router like that, because this responsibility often goes to a load balancer outside of Node.js. But everyone must start somewhere, and this implementation is easy to upgrade later.
Do not confuse multiple APIs with API versions. In the best case scenario, you never want API versions, ever. You would proceed like this:
├── website-api
│ ├── user-old
│ │ └── index.js
│ ├── user
│ │ └── index.js
│ ├── item
│ │ └── index.js
│ ├── index-v1.js
│ └── index-v2.js
or that:
├── website-api-v1
│ ├── user
│ │ └── index.js
│ ├── item
│ │ └── index.js
│ └── index.js
├── website-api-v2
│ ├── user
│ │ └── index.js
│ ├── item
│ │ └── index.js
│ └── index.js
You can't tell now, and you shouldn't care. When you make changes to an API, make them backward compatible. If you can't do that anymore, it means you made some critical mistakes in the past, or large business changes came in. This is not predictable.
One thing that could be advantageous about having separate api files is reusability.
Yes, but you can't tell now.
Regarding your other questions: stick to the SOLID principles. Each part of the code should have one specific purpose.
When I see your router folder, I have no idea what is in there. Well, I know it holds routers, but routers of what? There can be everything, yet nothing. Thus it is not easily extensible = bad.
When I look at my structure, I can more easily predict what is in there.
You should design your architecture according to the purpose, not the specific implementations. Let's say you have two APIs because you have two purposes. Are they both REST or GraphQL? Can I share code and remove duplication? Not that important. Sharing code is actually very dangerous if not done properly. Shared code is the worst part to refactor, and it often doesn't provide as many advantages.
... I have come to wonder whether I have made it too complex...
It depends. Is it a 14-day project? Then yes, you did. Is it meant to live for a year or more? Then you should go even deeper.

Related

How to enforce deployment order with Complex TurboRepo requirements

Is there a recommended way to enforce deployment order for specific apps using TurboRepo? I know you can specify that all child dependencies run first, but that results in undesired behavior in my scenario.
Here is an example of my file structure:
├── apps
│ ├── backend
│ └── web
├── packages
│ ├── assets
│ ├── config
│ ├── design-system
│ ├── hooks
│ └── utils
And here is the command I'm running to deploy:
yarn turbo run deploy:ci --filter=...[origin/main] --dry-run
In my scenario, I'd like my apps/backend to deploy before apps/web because web relies on output from the backend. I thought about using the following turbo.json:
{
  "$schema": "https://turborepo.org/schema.json",
  "baseBranch": "origin/main",
  "pipeline": {
    "deploy:ci": {
      "dependsOn": ["^deploy:ci"],
      "outputs": [".sst/**", ".build/**", ".expo/**"]
    }
  }
}
However, while this works if I add backend as a devDependency of web, it also results in backend always being rebuilt (even when none of its dependencies have changed). This is because if I change packages/hooks (which backend does not rely on), it will try to deploy packages/utils because hooks uses the utils package. This waterfalls and causes it to try to deploy backend because backend uses utils.
I'd also like to note that only the apps/* contain deploy:ci tasks, so there is really no need for it to try to deploy changes to any packages/* dependencies.
My end goal would look like the following:
Change packages/hooks
Detect change in packages/hooks and trigger deploy:ci for apps/web (which has hooks as a dependency)
Or
Change packages/utils
Detect change in packages/utils and try to deploy both apps/backend and apps/web because they both rely on utils
I've tried replacing
"dependsOn": ["^deploy:ci"],
with
"dependsOn": [],
and this does result in only the correct packages being rebuilt, but the deploy order is willy-nilly. Ideally, I'd have this latter behavior while still enforcing that backend always goes before web.
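One direction worth testing is a sketch along these lines; it is not verified against this setup, and it relies on Turborepo's package-specific pipeline entries (keys of the form `<package>#<task>`), which exist in recent Turborepo versions, so check the docs for yours:

```json
{
  "$schema": "https://turborepo.org/schema.json",
  "baseBranch": "origin/main",
  "pipeline": {
    "deploy:ci": {
      "dependsOn": [],
      "outputs": [".sst/**", ".build/**", ".expo/**"]
    },
    "web#deploy:ci": {
      "dependsOn": ["backend#deploy:ci"],
      "outputs": [".sst/**", ".build/**", ".expo/**"]
    }
  }
}
```

This keeps the empty dependsOn that gave correct change detection, while ordering web's deploy after backend's whenever both are scheduled. The caveat is that web's deploy may still pull in backend#deploy:ci as a dependency; whether a cache hit makes that a no-op depends on how deploy:ci is cached.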

Corrected scalable structure for a big RESTful API with node.js

I need to create a large server with Node.js that is as scalable as possible and suitable for serious production. In my project, I'm also using TypeScript.
Surfing the internet, I have seen that virtually all projects are structured by grouping files by their purpose/role. Only twice did I read the recommendation to structure the app as autonomous components.
So what is the best structure for great scalability?
└───src
├───bin
├───components
│ ├───Auth
│ ├───Post
│ ├───Profile
│ └───User
├───config
│ └───keys
└───database
or
└───src
├───config
│ └───components
├───controllers
├───models
│ └───plugins
├───routes
│ └───api
├───utils
└───validation
└───forms
You should use the second one, because it separates your code better and makes it more readable. You should also consider adding a tests folder and a .env file.

Lists all Blobs inside Azure Container with directory-level support using web front-end

I'm currently working on developing some code to display all blobs inside a specified Azure container using a web front-end. I'm expecting the final output to be something like this:
I started by creating a dummy storage account and populated it with some dummy files to play around with.
https://alicebob.blob.core.windows.net/documents
├── docx
│   ├── 201801_Discussion.docx
│   ├── 201802_Discussion.docx
├── xlsx
│   ├── 201801_Summary.xlsx
│   ├── 201802_Summary.xlsx
│   ├── 201803_Summary.xlsx
├── 201801_Review.pdf
├── 201802_Review.pdf
├── 201803_Review.pdf
To develop the file-listing function, I'm using the Azure Storage JavaScript client library from here. I put all the necessary code (.html and .js files) in the Azure Static website $web container and set index.html as the Index document name and Error document path in the Static website configuration.
https://alicebob.z23.web.core.windows.net/
├── azure-storage.blob.min.js
├── azure-storage.common.min.js
├── index.html
The problem is that the only listing functions available are listBlobsSegmentedWithPrefix and listBlobDirectoriesSegmentedWithPrefix. So, in my case, I assume it wouldn't work straightforwardly to list all the blobs and directories in a well-structured / tree format.
My current approach is to trick the code into using listBlobDirectoriesSegmentedWithPrefix repeatedly until there are no more directories to list inside, then continue listing using listBlobsSegmentedWithPrefix.
So far I'm quite satisfied that my code can list all the blobs at the leaf level and also list all the directories that aren't at the leaf level. You can take a look at the blob listing here and feel free to 'View Source' to see the code I've built so far.
The only problem I face is that this code fails to list the blobs if they aren't at the leaf level. For example, it fails to list these blobs in the alicebob storage account:
├── 201801_Review.pdf
├── 201802_Review.pdf
├── 201803_Review.pdf
This is an expected issue, as I'm not running listBlobsSegmentedWithPrefix when it isn't at the leaf level. The reason is that it would produce output like this, which isn't what I want:
├── docx/201801_Discussion.docx
├── docx/201802_Discussion.docx
├── xlsx/201801_Summary.xlsx
├── xlsx/201802_Summary.xlsx
├── xlsx/201803_Summary.xlsx
├── 201801_Review.pdf
├── 201802_Review.pdf
├── 201803_Review.pdf
Any suggestion on how to overcome this issue? The real implementation would involve a huge amount of data, so I think a simple if-then-else wouldn't be efficient in this case.
Sorry for the long description, but I just want to describe my problem as clearly as possible :)
There's an option called delimiter when listing blobs. Let's get down to code.
blobService.listBlobsSegmentedWithPrefix('documents', null, null, { delimiter: '/' }, (error, result, response) => {
  console.log(result);
  console.log(response.body.EnumerationResults.Blobs.BlobPrefix);
});
With the delimiter /, the listing operation returns results in two parts.
1. result, which contains info about the blobs under the root directory of the container, e.g. 201801_Review.pdf, etc. in your case.
2. BlobPrefix in the response body, which contains single-level directory names ending with the delimiter.
[ { Name: 'docx/' }, { Name: 'xlsx/' } ]
Using a BlobPrefix entry as the prefix, we can continue listing the content of the current subdirectory.
blobService.listBlobsSegmentedWithPrefix('documents', 'docx/', null, { delimiter: '/' }, (error, result, response) => {
  console.log(result);
  console.log(response.body.EnumerationResults.Blobs.BlobPrefix);
});
Basically, the result from point 1 is enough; you don't necessarily have to use BlobPrefix to refactor your code. See more info in the section Using a Delimiter to Traverse the Blob Namespace of List Blobs.
You can also do this without the overhead of the whole storage API, using a fetch request as follows.
fetch("https://cvworkshop.blob.core.windows.net/telaviv-bw/?restype=container&comp=list")
.then(response => response.text())
.then(str => new window.DOMParser().parseFromString(str, "text/xml"))
.then(data => console.log(data));
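Since a single flat listing (whether from listBlobsSegmentedWithPrefix without a delimiter, or from the fetch call above) already returns full blob paths, another option is to fold those paths into a tree client-side in one pass. A sketch with a hypothetical buildTree helper, using the example container's names:

```javascript
// Fold flat blob names like "docx/201801_Discussion.docx" into a nested tree.
// A null value marks a blob (leaf); an object marks a directory.
function buildTree(blobNames) {
  const root = {};
  for (const name of blobNames) {
    const parts = name.split('/');
    let node = root;
    for (let i = 0; i < parts.length - 1; i++) {
      node = node[parts[i]] = node[parts[i]] || {};
    }
    node[parts[parts.length - 1]] = null;
  }
  return root;
}

const tree = buildTree([
  'docx/201801_Discussion.docx',
  'xlsx/201801_Summary.xlsx',
  '201801_Review.pdf'
]);
console.log(JSON.stringify(tree, null, 2));
```

This avoids one request per directory level: a single listing, paged through its continuation token, feeds the whole tree.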

Nodejs: Good practice to just use the index.js to EXPORTS?

I am seeing a pattern in some code I have inherited. Each directory has its own JS file, but there is also an index.js that actually exports items from the other JS file or files.
I presume this is done so you can see exactly what you are exporting, as the main exports are in index.js and the main code is in the other JS file or files.
Is this correct? What is this pattern called?
Should I continue using this pattern?
Let's say I have the following directory structure:
MyApp
├── app.js
├── test.js
├── package.json
├─┬ controllers
│ ├── index.js
│ ├── signIn.js
│ └── signOut.js
└─┬ views
├── index.js
├── signIn.js
└── signOut.js
Placing the following code inside the index.js files...
// index.js
module.exports = {
  signIn: require('./signIn')
  , signOut: require('./signOut')
};
...allows you to require an entire directory like...
// test.js
describe('controllers', () => {
  // ~/controllers/index.js
  const controllers = require('./controllers');

  it('performs a sign-in', () => {
    ...
  });

  it('performs a sign-out', () => {
    ...
  });
});
The alternative is to require each file individually.
Having an index.js in a directory is not required. You may require a file in a directory without an index.js all the same.
// app.js
const signOut = require('./controllers/signOut.js')
However, it gets tedious as your app grows. I use a package like require-directory, as typing out each file in a directory is also tedious and somewhat error-prone.
// index.js
module.exports = require('require-directory')(module);
/*
This yields the same result as:
module.exports = {
  signIn: require('./signIn')
  , signOut: require('./signOut')
  , ...
};
*/
ES6 Module syntax
Given these two common types of structures...
MyApp
│ // files divided per type (controllers, components, actions, ...)
├─┬ actions
│ ├── index.js
│ ├── signIn.js
│ └── signOut.js
├─┬ components ...
├─┬ reducers ...
├─┬ pages ...
│
│ // files divided per component
├─┬ components ...
│ ├── index.js
│ ├── SimpleComponent.jsx
│ ├── AnotherComponent.duck.jsx // redux "duck" pattern
│ ├─┬ ComplexComponent // large complex logic, own actions, stylesheet, etc.
│ ...
├─┬ pages ...
│ ├── index.js
│ ├─┬ App
│ │ ├── index.js // not necessary here, matter of habit
│ │ ├── App.jsx
│ │ ├── actions.js
│ │ └── reducer.js
│ └─┬ Dashboard
├── another.js
...
You can simply import files in another.js like this
import {signIn, signOut} from './actions'
import {App} from './pages'
import {ComplexComponent} from './components'
instead of this (without index.js files)
import {signIn} from './actions/signIn'
import {signOut} from './actions/signOut'
import {App} from './pages/App/App' //notice the redundancy here
import {ComplexComponent} from './components/ComplexComponent/ComplexComponent'
More reading
ECMAScript 6 modules
import - JavaScript | MDN
Babel transpiler - brings the new imports to your browser now
Structuring React projects
React Redux "Ducks pattern" - a single file approach for components
The other answers provide a lot of great information, but to try and specifically answer your question 'Should I continue using this pattern?', I'd say no, at least most of the time.
The thing is, this pattern requires extra effort, as you have to maintain those extra index.js files. In my experience that effort is greater than the effort to simply write one-directory-longer import statements. Plus, you can get the same functionality you'd get from having an index.js without one, by using a module like require-dir.
All that being said, if you are making a library that will be consumed by a large number of people, like a critical module in a large programming department, or a public NPM module, then the effort of an index.js becomes more justified. As long as you have enough people using your modules, your users will (cumulatively) save more time from you adding them than you will lose maintaining them.
I will directly dive into your question on whether to use this pattern or not (as other answers are not sufficient for this).
Assuming that each directory in your code represents a standalone module (one that doesn't rely on another module to work), using this pattern gives you these advantages:
Better and more organized imports
Separation between internal/external definitions of each module (similar to using private/public on an interface/API)
The problems with this:
It can be very tiresome to keep the different modules loosely coupled (JS/TS is not pure OOP)
It requires active refactoring of module definitions, or you get more circular dependencies
It loads more code into memory (even if unused), though I'm not sure how bad this is, as there are optimizations that usually fix this problem when bundling production code
Circular dependencies are very problematic: importing the whole module/directory via index.js imports all of its parts (those declared in index.js), so if you have:
-- moduleA
├-- comp1A // needs comp1B
├-- comp2A
└-- index.js // export both comp1/2
-- moduleB
├-- comp1B
├-- comp2B // needs comp2A
└-- index.js // export both comp1/2
Example case - comp1A needs something from comp1B while comp2B needs something from comp2A
When importing the specific files (without index.js - import something from './moduleB/comp1B') you won't have circular dependencies.
But if you use index.js (import something from './moduleB') you will have circular dependencies.
My recommendation is to use index.js in the right places, and to keep those files maintained! Using index.js with small modules is perfect, but over time they grow and should be divided. index.js is very bad to use in a common/shared/utils/misc/core module (whatever you call it when you want to put uncategorized and unrelated code that is used across your whole project).
What about this?
module.exports = {
  ...require('./moduleA'),
  ...require('./moduleB')
}
(moduleA.a will be overridden by moduleB.a)
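That override is just last-wins object spread; a self-contained illustration, with inline objects standing in for the two required modules:

```javascript
// Inline stand-ins for require('./moduleA') and require('./moduleB').
const moduleA = { a: 'from A', onlyA: 1 };
const moduleB = { a: 'from B', onlyB: 2 };

// Same shape as: module.exports = { ...require('./moduleA'), ...require('./moduleB') }
const merged = { ...moduleA, ...moduleB };

console.log(merged.a);     // → from B (the later spread wins)
console.log(merged.onlyA); // → 1 (non-conflicting keys survive)
```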

Watch for bridge-related events using ARI

I am trying to use Asterisk ARI to watch for bridge-related events. I am using Asterisk 13.6.0.
Specifically, I want to know when a bridge has been created or destroyed, and when a user (channel) has joined or left the bridge. On my server, bridges are created dynamically when someone dials in, and destroyed automatically when the last member leaves the bridge.
Using the node-ari-client library from the Asterisk project, and following some of their example code, this is what I have so far.
var client = require("ari-client");
var util = require("util");

client.connect("http://localhost:8088", "username", "password")
  .then(function (ari) {
    ari.once("StasisStart", channelJoined);

    function channelJoined (event, incoming) {
      incoming.on("BridgeCreated", function (event, bridge) {
        console.log(util.format("Bridge created: %s", bridge.id));
      });
      incoming.on("BridgeDestroyed", function (event, bridge) {
        console.log(util.format("Bridge destroyed: %s", bridge.id));
      });
      incoming.on("ChannelEnteredBridge", function (event, channel) {
        console.log(util.format("Bridge was joined by: %s", channel.id));
      });
      incoming.on("ChannelLeftBridge", function (event, channel) {
        console.log(util.format("Bridge was left by: %s", channel.id));
      });
    }

    ari.start("bridge-watcher");
  })
  .done();
I expected that the .on() handlers would print to the console when the various events occurred. However, when calling into a bridge or leaving a bridge, nothing is ever printed to the console.
If it matters, here's the output of npm ls showing which versions I'm using. Node is v0.10.36.
├─┬ ari-client@0.5.0
│ ├── backoff-func@0.1.2
│ ├── bluebird@2.9.34
│ ├── node-uuid@1.4.1
│ ├─┬ swagger-client@2.0.26
│ │ ├── btoa@1.1.1
│ │ └─┬ shred@0.8.10
│ │   ├── ax@0.1.8
│ │   ├── cookiejar@1.3.1
│ │   ├── iconv-lite@0.2.11
│ │   └── sprintf@0.1.1
│ ├── underscore@1.6.0
│ └─┬ ws@0.4.31
│   ├── commander@0.6.1
│   ├── nan@0.3.2
│   ├── options@0.0.5
│   └── tinycolor@0.0.1
├── bluebird@3.1.1
└─┬ util@0.10.3
  └── inherits@2.0.1
Specifically, I want to know when a bridge has been created or
destroyed, and when a user (channel) has joined or left the bridge. On
my server, bridges are created dynamically when someone dials in, and
destroyed automatically when the last member leaves the bridge.
Remember: the primary purpose of ARI is to build your own dialplan applications, not to monitor the entirety of Asterisk. As such, by default, your external application is not subscribed to the resources in Asterisk. As the Channels in a Stasis Application section explains:
Resources in Asterisk do not, by default, send events about themselves to a connected ARI application. In order to get events about resources, one of three things must occur:
The resource must be a channel that entered into a Stasis dialplan application. A subscription is implicitly created in this case. The subscription is implicitly destroyed when the channel leaves the Stasis dialplan application.
While a channel is in a Stasis dialplan application, the channel may interact with other resources - such as a bridge. While channels interact with the resource, a subscription is made to that resource. When no more channels in a Stasis dialplan application are interacting with the resource, the implicit subscription is destroyed.
At any time, an ARI application may make a subscription to a resource in Asterisk through application operations. While that resource exists, the ARI application owns the subscription.
If you're expecting to get events automatically for resources in Asterisk that channels are using outside of the bridge-watcher application, you won't get them unless you do one of two things:
Explicitly subscribe to the resources using the applications resource. This works well for resources that are relatively static and/or long lived, such as Endpoints, static Bridges (such as those used for Conferences), Mailboxes, and Device States. It does not work well for transitory resources.
In Asterisk 13.6.0 and later, you can now subscribe to all event sources when you connect your WebSocket. In node-ari-client, you would do the following:
ari.start("bridge-watcher", true);
You should note, however, that even when you are subscribed to all resources, you don't explicitly own them; you merely get to watch them all automatically. The notion of ownership is very important in ARI, particularly as it pertains to what you can and cannot do to channels, and when. The wiki pages I've linked provide some reasonable documentation on how this works.
