ERR_MODULE_NOT_FOUND: Cannot find Module Error - javascript

Problem Summary
I am working on the backend of a MERN stack app that pulls restaurant data from a MongoDB collection. When I run nodemon server inside the backend folder, I get the error ERR_MODULE_NOT_FOUND:
[nodemon] 2.0.16
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node server index.js`
node:internal/errors:466
ErrorCaptureStackTrace(err);
^
Error [ERR_MODULE_NOT_FOUND]: Cannot find module '[redacted]/backend/dao/restaurants.DAO' imported from [redacted]/backend/api/restaurants.controller.js
at new NodeError (node:internal/errors:377:5)
at finalizeResolution (node:internal/modules/esm/resolve:405:11)
at moduleResolve (node:internal/modules/esm/resolve:966:10)
at defaultResolve (node:internal/modules/esm/resolve:1174:11)
at ESMLoader.resolve (node:internal/modules/esm/loader:605:30)
at ESMLoader.getModuleJob (node:internal/modules/esm/loader:318:18)
at ModuleWrap.<anonymous> (node:internal/modules/esm/module_job:80:40)
at link (node:internal/modules/esm/module_job:78:36) {
code: 'ERR_MODULE_NOT_FOUND'
}
Node.js v18.2.0
[nodemon] app crashed - waiting for file changes before starting...
Problem Details
My Folder Structure
My folder structure looks like this:
backend
  api
    restaurants.route.js
    restaurants.controller.js
  dao
    restaurants.DAO.js
  .env
  index.js
  server.js
restaurants.route.js
import express from "express";
import RestaurantsCtrl from "./restaurants.controller.js";
// create router instance
const router = express.Router();
// respond to GET requests with hello
router.route("/").get(RestaurantsCtrl.apiGetRestaurants);
export default router;
restaurants.controller.js
import RestaurantsDAO from "../dao/restaurants.DAO";
export default class RestaurantsCtrl {
  static async apiGetRestaurants(req, res, next) {
    // if a number of restaurants to show per page is passed in the URL, convert it to an integer; otherwise default to 20
    const restaurantsPerPage = req.query.restaurantsPerPage ? parseInt(req.query.restaurantsPerPage, 10) : 20;
    // if a page number is passed in the URL, convert it to an integer; otherwise default to 0
    const page = req.query.page ? parseInt(req.query.page, 10) : 0;
    let filters = {};
    // if filters are passed in the URL, add them to the filters object
    if (req.query.cuisine) {
      filters["cuisine"] = req.query.cuisine;
    } else if (req.query.zipcode) {
      filters["zipcode"] = req.query.zipcode;
    } else if (req.query.name) {
      filters["name"] = req.query.name;
    }
    // get the list of restaurants and the total number of restaurants from the database
    const { restaurantsList, totalNumRestaurants } = await RestaurantsDAO.getRestaurants({
      filters,
      page,
      restaurantsPerPage,
    });
    // create a response object to send back to the client
    let response = {
      restaurants: restaurantsList,
      page: page,
      filters: filters,
      entries_per_page: restaurantsPerPage,
      total_results: totalNumRestaurants,
    };
    // send the response back to the client
    res.json(response);
  }
}
server.js
import express from "express";
import cors from "cors";
import restaurants from "./api/restaurants.route.js";
// Create a new express application instance
const app = express();
// apply middleware
app.use(cors());
// parse request body as JSON. Our server can accept JSON data in the body of a request
app.use(express.json());
// specify some routes. This is the path that will be used to access the API
app.use("/api/v1/restaurants", restaurants);
// If someone goes to a path that doesn't exist, return a 404 error
app.use("*", (req, res) => res.status(404).json({error : "Not found"}));
// export the application instance for use in the rest of the application
export default app
index.js
import app from "./server.js"
import mongodb from "mongodb";
import dotenv from "dotenv";
import RestaurantsDAO from "./dao/restaurants.DAO.js";
// Load environment variables from .env file, where API keys and passwords are configured
dotenv.config();
// Get access to the mongo client
const MongoClient = mongodb.MongoClient;
// set port
const port = process.env.PORT || 8000;
// Connect to the database
const a = MongoClient.connect(
  // The URL of the database to connect to
  process.env.RESTREVIEWS_DB_URI,
  {
    // The options to use when connecting to the database
    maxPoolSize: 50,
    wtimeoutMS: 2500,
    useNewUrlParser: true,
  }
)
  // If the connection is not successful, log the error and exit
  .catch(err => {
    console.log(err.stack);
    process.exit(1);
  })
  // If the connection is successful, inject the client into the DAO and start the server
  .then(async client => {
    // Give the DAO a reference to the database client
    await RestaurantsDAO.injectDB(client);
    app.listen(port, () => {
      console.log(`Server listening on port ${port}`);
    });
  });

I found the solution to this problem. Node's ES module resolver does not guess file extensions for relative imports, so I needed to add .js at the end of restaurants.DAO in the line import RestaurantsDAO from "../dao/restaurants.DAO"; in the file restaurants.controller.js.
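With the extension added, the import at the top of restaurants.controller.js becomes:
// relative ES module imports must include the file extension
import RestaurantsDAO from "../dao/restaurants.DAO.js";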

Related

Displaying my own CSV Data with my own model, Forge autodesk

I hope you all are well. I am having trouble displaying my own Forge data in the Autodesk Forge reference application. My current .env file is as follows. However, whenever I open http://localhost:9000/upload, all I get in return is a blank screen.
FORGE_CLIENT_ID=STEHw2Qx... marked ...xrIJUeKRj6 #changed for post
FORGE_CLIENT_SECRET=A54... marked ...c348a #changed for post
FORGE_ENV=AutodeskProduction
FORGE_API_URL=https://developer.api.autodesk.com
FORGE_CALLBACK_URL=http://localhost:9000/oauth/callback
FORGE_BUCKET=cosmostool1.cosmosengineering.es #changed for post
ENV=local
#ADAPTER_TYPE=local
## Connect to Azure IoTHub and Time Series Insights
# ADAPTER_TYPE=azure
# AZURE_IOT_HUB_CONNECTION_STRING=
# AZURE_TSI_ENV=
#
## Azure Service Principle
# AZURE_CLIENT_ID=
# AZURE_APPLICATION_SECRET=
#
## Path to Device Model configuration File
# DEVICE_MODEL_JSON=
## End - Connect to Azure IoTHub and Time Series Insights
ADAPTER_TYPE=csv
CSV_MODEL_JSON=server/gateways/synthetic-data/device-models.json
CSV_DEVICE_JSON=server/gateways/synthetic-data/devices.json
CSV_DATA_END=2011-02-20T13:51:10.511Z #Format: YYYY-MM-DDTHH:MM:SS.000Z
CSV_DELIMITER="\t"
CSV_LINE_BREAK="\n"
CSV_TIMESTAMP_COLUMN="time"
if (process.env.ENV == "local") {
require("dotenv").config({
path: __dirname + "/../.env",
});
}
Because of this line at forge-dataviz-iot-reference-app/server/router/Index.js#L25, you must specify ENV=local before executing npm run dev. Otherwise, it won't read the content of .env.
if (process.env.ENV == "local") {
require("dotenv").config({
path: __dirname + "/../.env",
});
}
Or you can just change it to the following, so .env is always loaded:
require("dotenv").config({
path: __dirname + "/../.env",
});
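Note that the original check reads process.env.ENV before dotenv has loaded anything, so setting ENV=local only inside .env has no effect; if you keep the check, ENV=local has to come from the shell environment, for example:
ENV=local npm run dev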
Install dotenv
npm install dotenv
Create a config.js file in your directory and add the following code;
const dotenv = require('dotenv');
dotenv.config();
module.exports = {
  // Set environment variables or hard-code here
  azure: {
    azure_conn_string: process.env.AZURE_IOT_HUB_EVENT_HUB_CONNECTION_STRING
  }
};
Update your localserver.js file
const { app, router } = require("./app.js");
const config = require('./config');
app.use(router);
const server = require("http").createServer(app);
if (config.azure.azure_conn_string) {
  require("./RealTimeApi.js").createSocketIOServer(server);
}
const PORT = process.env.PORT || 9000;
async function start() {
  try {
    server.listen(PORT, () => {
      console.log(`localhost: ${PORT}`);
    });
  } catch (error) {
    console.log(error);
  }
}
start();

starting a next server on cpanel throwing 503 service unavailable

I'm attempting to deploy a Next.js app on my shared hosting server using the cPanel Setup Node.js App section, but when I start the app, despite it logging ready on http://localhost:3000, the site throws a 503 error.
I've uploaded the build folder alongside the next.config.js, package-lock.json, package.json and server.js to the application root, and this is my current file structure:
next_main
build (.next folder)
node_modules
next.config.js
package-lock.json
package.json
server.js
This is my server.js file (exactly the same as what Next provided in their custom server docs):
const { createServer } = require("http");
const { parse } = require("url");
const next = require("next");
const dev = process.env.NODE_ENV !== "production";
const hostname = "localhost";
const port = 3000;
const app = next({ dev, hostname, port });
const handle = app.getRequestHandler();
app.prepare().then(() => {
  createServer(async (request, response) => {
    try {
      const parsedURL = parse(request.url, true);
      const { pathname, query } = parsedURL;
      switch (pathname) {
        case "/a":
        case "/b":
          await app.render(request, response, pathname, query);
          break;
        default:
          await handle(request, response, parsedURL);
      }
    } catch (error) {
      console.error("Error occurred.", request.url, error);
      response.statusCode = 500;
      response.end("Internal server error.");
    }
  }).listen(port, error => {
    if (error) throw error;
    console.log(`> Ready on http://${hostname}:${port}`);
  });
}).catch(error => {
  if (error) throw error;
});
Failed to load next.config.js was also output in my stderr file, despite next.config.js being provided.
I've attached the current settings I have applied in my cPanel.
Please note that I do not have root access to the terminal, and am restricted to the next_main environment when running any NPM scripts.
Make sure you add all environment variables in the .env file.
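A minimal sketch of how the custom server above could pick those up, assuming dotenv is installed and that the port is one of the values supplied via .env (the PORT name here is an assumption):
// top of server.js - hypothetical: load .env and prefer its PORT over the hard-coded 3000
require("dotenv").config();
const port = parseInt(process.env.PORT, 10) || 3000;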

404 - File or directory not found in Next JS

I am making a Next.js application.
Deployment works fine in Vercel.
For deploying the same project on another server, I got help from https://stackoverflow.com/a/63660079/13270726 and followed the same instructions in our app.
I deployed the out directory to the server using an FTP client.
Issue
-> When we go to http://your-domain.com, it works fine (even a page refresh works fine on this page).
-> If we move to the about page at http://your-domain.com/about it also works, but a page refresh at that URL results in the error.
-> The page refresh also produces a console error like:
GET http://your-domain.com/about Not Found
next.config.js: (With public path)
const config = {
  webpack: (config, { isServer }) => {
    ...
    config.devServer = {
      historyApiFallback: true
    }
    config.output.publicPath = "/"
    return config;
  }
}
module.exports = withPlugins(config);
The issue arises only on page refresh or when we manually type the URL; when we navigate to the page through the app, the issue is not there.
Any help would be much appreciated as I have been stuck on this for a long time.
Edit:
I have a server.js file and its code looks like this:
const dotenv = require("dotenv");
// import ENVs from ".env.local" and append to process
dotenv.config({ path: ".env.local" });
const express = require("express");
const address = require("address");
const chalk = require("chalk");
// create express web server instance
const app = express();
// pull out ENVs from process
const { LOCALHOST, PORT } = process.env;
// get the Local IP address
const LOCALIP = address.ip();
// tell express to serve up production assets from the out directory
app.use(express.static("out" + '/'));
app.get('/*', (req, res) => {
res.send('ok')
});
app.all('*', function(req, res) {
res.redirect('/index.html');
});
// tell express to listen for incoming connections on the specified PORT
app.listen(PORT, (err) => {
if (!err) {
// log the LOCALHOST and LOCALIP addresses where the app is running
console.log(
`\n${chalk.rgb(7, 54, 66).bgRgb(38, 139, 210)(" I ")} ${chalk.blue(
"Application is running at"
)} ${chalk.rgb(235, 220, 52).bold(LOCALHOST)} ${chalk.blue(
"or"
)} ${chalk.rgb(235, 220, 52).bold(`http://${LOCALIP}:${PORT}`)}\n`
);
} else {
console.err(`\nUnable to start server: ${err}`);
}
});

IBM Watson WebSocket Connection failure. HTTP authentication failed; no valid credentials available

I am working on a speech-to-text web app using the IBM Watson Speech to Text API. The API is called on the click of a button, but whenever I click the button I get the above-mentioned error. I have stored my API key and URL in a .env file.
I have tried a lot but keep getting this error. Please help me out, as I am new to all this.
I got server.js from the Watson GitHub repo.
Server.js
'use strict';
/* eslint-env node, es6 */
const env = require('dotenv');
env.config();
const express = require('express');
const app = express();
const AuthorizationV1 = require('watson-developer-cloud/authorization/v1');
const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');
const TextToSpeechV1 = require('watson-developer-cloud/text-to-speech/v1');
const vcapServices = require('vcap_services');
const cors = require('cors');
// allows environment properties to be set in a file named .env
// on bluemix, enable rate-limiting and force https
if (process.env.VCAP_SERVICES) {
  // enable rate-limiting
  const RateLimit = require('express-rate-limit');
  app.enable('trust proxy'); // required to work properly behind Bluemix's reverse proxy
  const limiter = new RateLimit({
    windowMs: 15 * 60 * 1000, // 15 minutes
    max: 100, // limit each IP to 100 requests per windowMs
    delayMs: 0 // disable delaying - full speed until the max limit is reached
  });
  // apply to /api/*
  app.use('/api/', limiter);
  // force https - microphone access requires https in Chrome and possibly other browsers
  // (*.mybluemix.net domains all have built-in https support)
  const secure = require('express-secure-only');
  app.use(secure());
}
app.use(express.static(__dirname + '/static'));
app.use(cors())
// token endpoints
// **Warning**: these endpoints should probably be guarded with additional authentication & authorization for production use
// speech to text token endpoint
var sttAuthService = new AuthorizationV1(
  Object.assign(
    {
      iam_apikey: process.env.SPEECH_TO_TEXT_IAM_APIKEY, // if using an RC service
      url: process.env.SPEECH_TO_TEXT_URL ? process.env.SPEECH_TO_TEXT_URL : SpeechToTextV1.URL
    },
    vcapServices.getCredentials('speech_to_text') // pulls credentials from the environment in Bluemix, otherwise returns {}
  )
);
app.use('/api/speech-to-text/token', function(req, res) {
  sttAuthService.getToken(function(err, token) {
    if (err) {
      console.log('Error retrieving token: ', err);
      res.status(500).send('Error retrieving token');
      return;
    }
    res.send(token);
  });
});
const port = process.env.PORT || process.env.VCAP_APP_PORT || 3002;
app.listen(port, function() {
console.log('Example IBM Watson Speech JS SDK client app & token server live at http://localhost:%s/', port);
});
// Chrome requires https to access the user's microphone unless it's a localhost url so
// this sets up a basic server on port 3001 using an included self-signed certificate
// note: this is not suitable for production use
// however bluemix automatically adds https support at https://<myapp>.mybluemix.net
if (!process.env.VCAP_SERVICES) {
  const fs = require('fs');
  const https = require('https');
  const HTTPS_PORT = 3001;
  const options = {
    key: fs.readFileSync(__dirname + '/keys/localhost.pem'),
    cert: fs.readFileSync(__dirname + '/keys/localhost.cert')
  };
  https.createServer(options, app).listen(HTTPS_PORT, function() {
    console.log('Secure server live at https://localhost:%s/', HTTPS_PORT);
  });
}
App.js
import React, {Component} from 'react';
import 'tachyons';
//import WatsonSpeech from 'ibm-watson';
var recognizeMic = require('watson-speech/speech-to-text/recognize-microphone');
class App extends Component {
  onListenClick = () => {
    fetch('http://localhost:3002/api/speech-to-text/token')
      .then(function(response) {
        return response.text();
      }).then(function(token) {
        var stream = recognizeMic({
          token: token, // use `access_token` as the parameter name if using an RC service
          objectMode: true, // send objects instead of text
          extractResults: true, // convert {results: [{alternatives:[...]}], result_index: 0} to {alternatives: [...], index: 0}
          format: false // optional - performs basic formatting on the results such as capitals and periods
        });
        stream.on('data', function(data) {
          console.log('error 1')
          console.log(data);
        });
        stream.on('error', function(err) {
          console.log('error 2')
          console.log(err);
        });
        //document.querySelector('#stop').onclick = stream.stop.bind(stream);
      }).catch(function(error) {
        console.log('error 3')
        console.log(error);
      });
  }
  render() {
    return (
      <div>
        <h2 className="tc"> Hello, and welcome to Watson Speech to text api</h2>
        <button onClick={this.onListenClick}>Listen to Microphone</button>
      </div>
    );
  }
}
export default App;
Since the only code you show is fetching an authorisation token, I guess that is what is throwing the authentication failure. I am not sure how old the code you are using is, but the mechanism you are using dates from when the STT service credentials were a userid / password pair. That mechanism became unreliable when IAM keys started to be used.
Your sample is still using watson-developer-cloud, but that has been superseded by ibm-watson. As migrating the code to ibm-watson would take a lot of rework, you can continue to use watson-developer-cloud.
If you do stick with watson-developer-cloud and you want to get hold of a token with an IAM key, then use:
const AuthIAMV1 = require('ibm-cloud-sdk-core/iam-token-manager/v1');
...
const tokenService = new AuthIAMV1.IamTokenManagerV1({ iamApikey: apikey });
...
tokenService.getToken((err, res) => {
  if (err) {
    ...
  } else {
    token = res;
    ...
  }
});
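A minimal sketch of wiring that into the token endpoint from the server.js above, reusing the getToken callback shape shown here (the endpoint path and variable names follow the question's code; treat this as an illustration, not the library's documented setup):
// hypothetical sketch: serve an IAM access token from the existing endpoint
const tokenService = new AuthIAMV1.IamTokenManagerV1({
  iamApikey: process.env.SPEECH_TO_TEXT_IAM_APIKEY
});
app.use('/api/speech-to-text/token', function (req, res) {
  tokenService.getToken(function (err, token) {
    if (err) {
      console.log('Error retrieving IAM token: ', err);
      res.status(500).send('Error retrieving token');
      return;
    }
    // the browser-side SDK expects this value as `access_token` when IAM keys are used
    res.send(token);
  });
});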

Mocha API Testing: getting 'TypeError: app.address is not a function'

My Issue
I've coded a very simple CRUD API and recently started writing some tests using chai and chai-http, but I'm having an issue when running my tests with $ mocha.
When I run the tests I get the following error on the shell:
TypeError: app.address is not a function
My Code
Here is a sample of one of my tests (/tests/server-test.js):
var chai = require('chai');
var mongoose = require('mongoose');
var chaiHttp = require('chai-http');
var server = require('../server/app'); // my express app
var should = chai.should();
var testUtils = require('./test-utils');
chai.use(chaiHttp);
describe('API Tests', function() {
  before(function() {
    mongoose.createConnection('mongodb://localhost/bot-test', myOptionsObj);
  });
  beforeEach(function(done) {
    // I do stuff like populating db
  });
  afterEach(function(done) {
    // I do stuff like deleting populated db
  });
  after(function() {
    mongoose.connection.close();
  });
  describe('Boxes', function() {
    it.only('should list ALL boxes on /boxes GET', function(done) {
      chai.request(server)
        .get('/api/boxes')
        .end(function(err, res) {
          res.should.have.status(200);
          done();
        });
    });
    // the rest of the tests would continue here...
  });
});
And my express app files (/server/app.js):
var mongoose = require('mongoose');
var express = require('express');
var api = require('./routes/api.js');
var app = express();
mongoose.connect('mongodb://localhost/db-dev', myOptionsObj);
// application configuration
require('./config/express')(app);
// routing set up
app.use('/api', api);
var server = app.listen(3000, function () {
  var host = server.address().address;
  var port = server.address().port;
  console.log('App listening at http://%s:%s', host, port);
});
and (/server/routes/api.js):
var express = require('express');
var boxController = require('../modules/box/controller');
var thingController = require('../modules/thing/controller');
var router = express.Router();
// API routing
router.get('/boxes', boxController.getAll);
// etc.
module.exports = router;
Extra notes
I've tried logging out the server variable in the /tests/server-test.js file before running the tests:
...
var server = require('../server/app'); // my express app
...
console.log('server: ', server);
...
and the result of that is an empty object: server: {}.
You don't export anything in your app module. Try adding this to your app.js file:
module.exports = server
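Applied to the /server/app.js shown in the question, that means exporting the http.Server instance returned by app.listen (only the last line is new):
// /server/app.js - export the server so chai-http can call address() on it
var server = app.listen(3000, function () {
  var host = server.address().address;
  var port = server.address().port;
  console.log('App listening at http://%s:%s', host, port);
});
module.exports = server;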
It's important to export the http.Server object returned by app.listen(3000) instead of just the function app, otherwise you will get TypeError: app.address is not a function.
Example:
index.js
const koa = require('koa');
const app = new koa();
module.exports = app.listen(3000);
index.spec.js
const request = require('supertest');
const app = require('./index.js');
describe('User Registration', () => {
  const agent = request.agent(app);
  it('should ...', () => {
This may also help, and addresses @dman's point about changing application code to fit a test.
Make your request to localhost and the port as needed:
chai.request('http://localhost:5000')
instead of
chai.request(server)
This fixed the same error message I had when using Koa (v2) and AVA.
The answers above correctly address the issue: supertest wants an http.Server to work on. However, calling app.listen() to get a server will also start a listening server; this is bad practice and unnecessary.
You can get around this by using http.createServer():
import * as http from 'http';
import * as supertest from 'supertest';
import * as test from 'tape';
import * as Koa from 'koa';
const app = new Koa();
// add some routes here
const apptest = supertest(http.createServer(app.callback()));
test('GET /healthcheck', (t) => {
  apptest.get('/healthcheck')
    .expect(200)
    .expect(res => {
      t.equal(res.text, 'Ok');
    })
    .end(t.end.bind(t));
});
Just in case someone uses Hapi.js: the issue still occurs there, because Hapi does not use Express.js, so the address() function does not exist.
TypeError: app.address is not a function
at serverAddress (node_modules/chai-http/lib/request.js:282:18)
The workaround to make it work:
// this makes the server start up
let server = require('../../server')
// pass this instead of server to avoid the error
const API = 'http://localhost:3000'
describe('/GET token ', () => {
  it('JWT token', (done) => {
    chai.request(API)
      .get('/api/token?....')
      .end((err, res) => {
        res.should.have.status(200)
        res.body.should.be.a('object')
        res.body.should.have.property('token')
        done()
      })
  })
})
Export app at the end of the main API file like index.js.
module.exports = app;
We had the same issue when running mocha through ts-node in our Node + TypeScript serverless project.
Our tsconfig.json had "sourceMap": true, so the generated .js and .js.map files caused some odd transpilation issues (similar to this one) when we ran the mocha runner through ts-node. I set the sourceMap flag to false and deleted all the .js and .js.map files in our src directory, and the issue was gone.
If you have already generated files in your src folder, the commands below will be really helpful.
find src -name "*.js.map" -exec rm {} \;
find src -name "*.js" -exec rm {} \;
I am using Jest and Supertest but was receiving the same error. It was because my server takes time to set up (the setup is async: connecting to the db, reading config, etc.). I needed to use Jest's beforeAll helper to allow the async setup to run. I also needed to refactor my server to separate out the listening step, and instead used @Whyhankee's suggestion to create the test's server.
index.js
export async function createServer() {
  // set up db, server, config, middleware
  return express();
}
async function startServer() {
  let app = await createServer();
  await app.listen({ port: 4000 });
  console.log("Server has started!");
}
if (process.env.NODE_ENV === "dev") startServer();
test.ts
import { createServer as createMyAppServer } from '@index';
import { test, expect, beforeAll } from '@jest/globals';
const supertest = require("supertest");
import * as http from 'http';
let request: any;
beforeAll(async () => {
  request = supertest(http.createServer(await createMyAppServer()));
});
test("fetch users", async (done: any) => {
  request
    .post("/graphql")
    .send({
      query: "{ getQueryFromGqlServer (id:1) { id} }",
    })
    .set("Accept", "application/json")
    .expect("Content-Type", /json/)
    .expect(200)
    .end(function (err: any, res: any) {
      if (err) return done(err);
      expect(res.body).toBeInstanceOf(Object);
      let serverErrors = JSON.parse(res.text)['errors'];
      expect(serverErrors.length).toEqual(0);
      expect(res.body.data.id).toEqual(1);
      done();
    });
});
Edit:
I also had errors when using data.forEach(async () => ...); I should have used for (let x of ...) in my tests.
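The difference, roughly: forEach fires every async callback without awaiting it, so the test can finish before the work does, while for...of awaits each iteration in turn (a generic sketch; data and doSomething are placeholders):
it('processes items in order', async () => {
  // data.forEach(async (item) => { await doSomething(item); }) would start
  // every callback at once and return immediately;
  // for...of waits for each item before moving on
  for (const item of data) {
    await doSomething(item);
  }
});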
