Can't connect Redis server to nodejs, Docker compose

I'm struggling to connect a Redis deployment to my Node.js app. Locally, without Docker, it works fine, so I'm unsure whether this is an issue with my code or with the way I've set up my Docker Compose file.
Dockerfile:
FROM node:8
WORKDIR /app
COPY package.json /app
COPY . /app
RUN npm install
CMD ["npm", "start"]
EXPOSE 3000
docker-compose.yml
version: "3"
services:
  web:
    container_name: web-container
    restart: always
    depends_on:
      - redis
    build: .
    ports:
      - "3000:3000"
    links:
      - redis
  redis:
    container_name: redis-container
    image: "redis:latest"
    ports:
      - "6379:6379"
    volumes:
      - ./data:/data
Redis Connection File (RedisService.js)
const redis = require("redis");
const client = redis.createClient();
const DbUtils = require("../../db_utils");
const { promisify } = require("util");
const getAsync = promisify(client.get).bind(client);
const existsAsync = promisify(client.exists).bind(client);

class RedisCache {
    constructor () {
        // * Initialise the connection to the redis server; track the state on
        // the instance (this.connected), not in a local variable, otherwise
        // getStatus() would always return undefined
        client.on("connect", () => { console.log("📒 Redis cache is ready"); this.connected = true; });
        client.on("error", (e) => { console.log("Redis cache error:\n" + e); this.connected = false; });
    }

    async setData (id, data) {
        // * Stringify data if it's an object
        data = data instanceof Object ? JSON.stringify(data) : data;
        client.set(id, data);
        return true;
    }

    async getData (key) {
        // Parse stored JSON back into an object; fall back to the raw string
        return getAsync(key).then(data => {
            try {
                const parsed = JSON.parse(data);
                return parsed instanceof Object ? parsed : data;
            } catch (e) {
                return data;
            }
        });
    }

    async exists (key) {
        return existsAsync(key);
    }

    // Returns status of redis cache
    async getStatus () {
        return this.connected;
    }
}

module.exports = new RedisCache();
ERROR
Error: Redis connection to 127.0.0.11:6379 failed - connect ECONNREFUSED 127.0.0.11:6379

When you run your containers via docker-compose they are all connected to a common network. The service name is the DNS name of the corresponding container, so to access the redis container from web you should create the client like this:
const client = redis.createClient({
    port: 6379,
    host: 'redis'
});
You have not configured the host, so it uses the default one, 127.0.0.1. But from the point of view of your web container, Redis is not running on localhost. Instead it runs in its own container, whose DNS name is redis.
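If you want the same code to work both locally and inside Compose, one option is to read the host from the environment. A minimal sketch (REDIS_HOST and REDIS_PORT are assumed variable names you would set yourself, e.g. under the web service's environment: key in docker-compose.yml):
const redis = require("redis");

// Falls back to localhost when the variables are unset (running outside Docker);
// inside Compose, set REDIS_HOST=redis so the client targets the redis service.
const client = redis.createClient({
    host: process.env.REDIS_HOST || "127.0.0.1",
    port: process.env.REDIS_PORT || 6379
});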

The beginning (the Docker part) of this tutorial worked for me:
https://medium.com/geekculture/using-redis-with-docker-and-nodejs-express-71dccd495fd3
docker run -d --name <CONTAINER_NAME> -p 127.0.0.1:6379:6379 redis
then in the node server (like the example on the official redis website):
const redis = require('redis');

async function start() {
    // node-redis v4 API: createClient takes an options object; the old
    // positional (port, host) form is from v3, which has no connect()
    const client = redis.createClient({ url: 'redis://127.0.0.1:6379' });
    await client.connect();
    await client.set('mykey', 'Hello from node redis');
    const myKeyValue = await client.get('mykey');
    console.log(myKeyValue);
}

start();

Related

starting a next server on cpanel throwing 503 service unavailable

I'm attempting to deploy a Next.js app on my shared hosting server using cPanel's Setup Node.js App section, but when I start the app, despite it reporting ready on http://localhost:3000, the site throws a 503 error.
I've uploaded the build folder alongside next.config.js, package-lock.json, package.json and server.js to the application root, and this is my current file structure:
next_main
  build (the .next folder)
  node_modules
  next.config.js
  package-lock.json
  package.json
  server.js
This is my server.js file (exactly the same as what Next provides in their custom server docs):
const { createServer } = require("http");
const { parse } = require("url");
const next = require("next");

const dev = process.env.NODE_ENV !== "production";
const hostname = "localhost";
const port = 3000;
const app = next({ dev, hostname, port });
const handle = app.getRequestHandler();

app.prepare().then(() => {
    createServer(async (request, response) => {
        try {
            const parsedURL = parse(request.url, true);
            const { pathname, query } = parsedURL;
            switch (pathname) {
                case "/a":
                case "/b":
                    await app.render(request, response, pathname, query);
                    break;
                default:
                    await handle(request, response, parsedURL);
            }
        } catch (error) {
            console.error("Error occurred.", request.url, error);
            response.statusCode = 500;
            response.end("Internal server error.");
        }
    }).listen(port, error => {
        if (error) throw error;
        console.log(`> Ready on http://${hostname}:${port}`);
    });
}).catch(error => {
    if (error) throw error;
});
Failed to load next.config.js was also output in my stderr file, despite next.config.js being present.
I've attached the current settings I have applied in my cPanel.
Please note that I do not have root access to the terminal, and am restricted to the next_main environment when running any NPM scripts.
Make sure you add all environment variables in the .env file.
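For instance, a minimal sketch assuming the app loads its configuration with the dotenv package (DATABASE_URL is an illustrative name, not something from the question):
require("dotenv").config(); // reads .env from the application root

// If a value the app depends on is missing, the app can crash on startup,
// which cPanel's proxy then surfaces as a 503.
const databaseUrl = process.env.DATABASE_URL;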

What is Node.js cluster best practice?

Is it better to write our server logic before forking workers or after?
I'll give two examples below to make it clear.
example #1:
const express = require("express");
const cluster = require('cluster');

const app = express();
app.get("/path", somehandler);

if (cluster.isMaster) {
    // forking workers..
} else {
    app.listen(8000);
}
or example #2:
const cluster = require('cluster');

if (cluster.isMaster) {
    // forking workers..
} else {
    const express = require("express");
    const app = express();
    app.get("/path", somehandler);
    app.listen(8000);
}
What is the difference?
There is no difference: when you call cluster.fork(), it calls child_process.fork on the same entry file and keeps a child-process handle for inter-process communication.
See the methods defined at lines 167, 102, 51 and 52 of cluster's master module.
Let's get back to your code:
In example #1 it assigns the variables and creates the app instance in both the master and the child processes, then checks whether the process is the master.
In example #2 it first checks whether the process is the master, and only if it is not does it assign the variables, create the app instance and bind the listener to the port in the child workers.
In fact both will do the same operations in the child processes, as the sketch after this list demonstrates:
assigning vars
creating app instance
starting listener
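Here is a minimal sketch (not from either example) showing why: fork() re-executes the same entry file, so everything above the isMaster check runs in the master and in every worker alike:
const cluster = require("cluster");

console.log(`this line runs in every process (pid ${process.pid})`);

if (cluster.isMaster) {
    cluster.fork(); // starts a child process that runs this same file
    cluster.fork();
} else {
    console.log(`worker ${process.pid} would create the app and listen here`);
}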
My own best practice for using cluster has two steps:
Step 1 - have a custom cluster wrapper in a separate module and wrap the application call:
Have cluster.js file:
'use strict';

module.exports = (callable) => {
    const cluster = require('cluster');
    const numCpu = require('os').cpus().length;

    const handleDeath = (deadWorker) => {
        console.log('worker ' + deadWorker.process.pid + ' dead');
        const worker = cluster.fork();
        console.log('re-spawning worker ' + worker.process.pid);
    };

    process.on('uncaughtException', (err) => {
        console.error('uncaughtException:', err.message);
        console.error(err.stack);
    });

    cluster.on('exit', handleDeath);

    // no need for clustering if there is just 1 cpu
    if (numCpu === 1 || !cluster.isMaster) {
        return callable();
    }

    // saving 1 cpu for the master process (1 M + N instances);
    // with 2 cpus, create 2 instances anyway, since 1 M + 1 instance
    // is ineffective when an instance needs re-spawning
    const instances = numCpu > 2 ? numCpu - 1 : numCpu;
    console.log('Starting', instances, 'instances');
    for (let i = 0; i < instances; i++) cluster.fork();
};
Keep app.js simple like this for modularity and testability (read about supertest; a quick sketch follows this block):
'use strict';

const express = require("express");
const app = express();

app.get("/path", somehandler);

module.exports = app;
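A quick sketch of what that buys you (assuming supertest is installed and somehandler stands in for a real handler): supertest can exercise the exported app directly, without binding a port:
const request = require('supertest');
const app = require('./app');

// supertest wraps the app in an ephemeral server internally
request(app)
    .get('/path')
    .expect(200)
    .then(() => console.log('route responds'))
    .catch(console.error);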
Serving the app at some port must be handled by a different module, so have server.js look like this:
'use strict';

const start = require('./cluster');

start(() => {
    const http = require('http');
    const app = require('./app');
    const listenHost = process.env.HOST || '127.0.0.1';
    const listenPort = process.env.PORT || 8080;
    const httpServer = http.createServer(app);
    httpServer.listen(listenPort, listenHost,
        () => console.log('App listening at http://' + listenHost + ':' + listenPort));
});
You may add lines like these to the scripts section of package.json:
"scripts": {
    "start": "node server.js",
    "watch": "nodemon server.js",
    ...
}
Run the app using:
node server.js, nodemon server.js
or
npm start, npm run watch
Step 2 - when containerization is needed:
Keep the code structure from Step 1 and use Docker.
The cluster module will get the cpu resources provided by the container orchestrator, and as an extra you'll have the ability to scale docker instances on demand using Docker Swarm, Kubernetes, DC/OS, etc.
Dockerfile:
FROM node:alpine

ENV PORT=8080
EXPOSE $PORT

ADD ./ /app
WORKDIR /app

RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh
RUN npm i

CMD ["npm", "start"]
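To try it locally (myapp is an assumed image tag):
docker build -t myapp .
docker run -p 8080:8080 myapp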

Node.js cluster error

Hello, I am very new to Node.js and JavaScript. I am trying to create a cluster.js with the Node.js cluster module; at the end of my if statement I am calling server.js to start the app.
cluster.js
const cluster = require('cluster');
const cpuCount = require('os').cpus().length;
const startServer = require('./server');

if (cluster.isMaster) {
    for (let i = 0; i < cpuCount; i += 1) {
        cluster.fork();
    }
    cluster.on('exit', () => {
        cluster.fork();
    });
} else {
    return startServer;
}
server.js
const fs = require('fs');
const path = require('path');
const express = require('express');
const auth = require('http-auth');
const {
    createBundleRenderer,
} = require('vue-server-renderer');

const bundle = fs.readFileSync('dist/server.js', 'utf-8');
const renderer = createBundleRenderer(bundle);

function parseIndexHtml() {
    const [
        entire,
        htmlOpen,
        htmlOpenTailAndHead,
        headCloseAndBodyOpen,
        bodyOpenTailAndContentBeforeApp,
        contentAfterAppAndHtmlClose,
    ] = fs.readFileSync('index.html', 'utf8').match(/^([\s\S]+?<html)([\s\S]+?)(<\/head>[\s\S]*?<body)([\s\S]+?)<div id="?app"?><\/div>([\s\S]+)$/);
    return {
        entire,
        htmlOpen,
        htmlOpenTailAndHead,
        headCloseAndBodyOpen,
        bodyOpenTailAndContentBeforeApp,
        contentAfterAppAndHtmlClose,
    };
}

const indexHtml = parseIndexHtml();
const app = express();
const basicAuth = auth.basic({
    realm: 'Jobportal',
}, (username, password, callback) => {
    callback(username === 'x' && password === 'x');
});

app.get('/ping', (request, response) => {
    response.status(200).end();
});

app.use(auth.connect(basicAuth));

// serve pure static assets
app.use('/public', express.static(path.resolve('./public')));
app.use('/dist', express.static(path.resolve('./dist')));

app.get('*', (request, response) => {
    const context = {
        url: request.url,
    };
    renderer.renderToString(context, (error, html) => {
        if (error) {
            if (error.code === '404') {
                response.status(404).end(indexHtml.entire);
            } else {
                response.status(500).end(indexHtml.entire);
                console.error(`Error during render: ${request.url}`); // eslint-disable-line
                console.error(error); // eslint-disable-line
            }
            return;
        }
        const {
            title,
            htmlAttrs,
            bodyAttrs,
            link,
            style,
            script,
            noscript,
            meta,
        } = context.meta.inject();
        response.write(
            `${indexHtml.htmlOpen} data-vue-meta-server-rendered ${htmlAttrs.text()} ${indexHtml.htmlOpenTailAndHead}
            ${meta.text()}
            ${title.text()}
            ${link.text()}
            ${style.text()}
            ${script.text()}
            <script>
                window.__INITIAL_STATE__ = ${JSON.stringify(context.initialState)}
            </script>
            ${noscript.text()}
            ${indexHtml.headCloseAndBodyOpen} ${bodyAttrs.text()} ${indexHtml.bodyOpenTailAndContentBeforeApp}
            ${html}
            <script src="/dist/client.js"></script>
            ${indexHtml.contentAfterAppAndHtmlClose}`
        );
        response.end();
    });
});

const port = 8181;

// start server
app.listen(port, () => {
    console.log(`server started at port ${port}`); // eslint-disable-line
});
I get an error
server started at port 8181
events.js:163
throw er; // Unhandled 'error' event
^
Error: bind EADDRINUSE null:8181
at Object.exports._errnoException (util.js:1050:11)
at exports._exceptionWithHostPort (util.js:1073:20)
at listenOnMasterHandle (net.js:1336:16)
at rr (internal/cluster/child.js:111:12)
at Worker.send (internal/cluster/child.js:78:7)
at process.onInternalMessage (internal/cluster/utils.js:42:8)
at emitTwo (events.js:111:20)
at process.emit (events.js:194:7)
at process.nextTick (internal/child_process.js:766:12)
at _combinedTickCallback (internal/process/next_tick.js:73:7)
events.js:163
throw er; // Unhandled 'error' event
^
Any ideas why?
EADDRINUSE means that the port number which listen() tries to bind the server to is already in use.
You need to verify whether the port is already taken on your system. To do that:
On Linux: sudo netstat -nltp | grep (port), in your case port 8181.
On OSX: sudo lsof -i -P | grep (port)
If you get a result, you need to kill that process (kill <pid>).
If you use pm2, check that pm2 list returns 0 processes. Also note that when you do a pm2 stopAll, the socket is not released; don't forget to do a pm2 kill to be sure the daemon is killed.
$ pm2 kill
Daemon killed
Verifying on Windows:
C:\> netstat -a -b
-a Displays all connections and listening ports.
-b Displays the executable involved in creating each connection or listening port. In some cases well-known executables host multiple independent components, and in these cases the sequence of components involved in creating the connection or listening port is displayed. In this case the executable name is in [] at the bottom, on top is the component it called, and so forth until TCP/IP was reached. Note that this option can be time-consuming and will fail unless you have sufficient permissions.
-n Displays addresses and port numbers in numerical form.
-o Displays the owning process ID associated with each connection.
Examples of killing a process from the Windows command line:
If you know the name of a process to kill, for example notepad.exe, use the following command from a command prompt to end it:
taskkill /IM notepad.exe
To kill a single instance of a process, specify its process id (PID). For example, if the desired process has a PID of 827, use the following command to kill it:
taskkill /PID 827

Mocha API Testing: getting 'TypeError: app.address is not a function'

My Issue
I've coded a very simple CRUD API and I've recently started writing some tests using chai and chai-http, but I'm having an issue when running my tests with $ mocha.
When I run the tests I get the following error on the shell:
TypeError: app.address is not a function
My Code
Here is a sample of one of my tests (/tests/server-test.js):
var chai = require('chai');
var mongoose = require('mongoose');
var chaiHttp = require('chai-http');
var server = require('../server/app'); // my express app
var should = chai.should();
var testUtils = require('./test-utils');

chai.use(chaiHttp);

describe('API Tests', function() {
    before(function() {
        mongoose.createConnection('mongodb://localhost/bot-test', myOptionsObj);
    });
    beforeEach(function(done) {
        // I do stuff like populating the db
    });
    afterEach(function(done) {
        // I do stuff like deleting the populated db
    });
    after(function() {
        mongoose.connection.close();
    });
    describe('Boxes', function() {
        it.only('should list ALL boxes on /boxes GET', function(done) {
            chai.request(server)
                .get('/api/boxes')
                .end(function(err, res) {
                    res.should.have.status(200);
                    done();
                });
        });
        // the rest of the tests would continue here...
    });
});
And my express app file (/server/app.js):
var mongoose = require('mongoose');
var express = require('express');
var api = require('./routes/api.js');

var app = express();
mongoose.connect('mongodb://localhost/db-dev', myOptionsObj);

// application configuration
require('./config/express')(app);

// routing set up
app.use('/api', api);

var server = app.listen(3000, function () {
    var host = server.address().address;
    var port = server.address().port;
    console.log('App listening at http://%s:%s', host, port);
});
and (/server/routes/api.js):
var express = require('express');
var boxController = require('../modules/box/controller');
var thingController = require('../modules/thing/controller');
var router = express.Router();

// API routing
router.get('/boxes', boxController.getAll);
// etc.

module.exports = router;
Extra notes
I've tried logging out the server variable in the /tests/server-test.js file before running the tests:
...
var server = require('../server/app'); // my express app
...
console.log('server: ', server);
...
and the result of that is an empty object: server: {}.
You don't export anything in your app module. Try adding this to your app.js file:
module.exports = server;
It's important to export the http.Server object returned by app.listen(3000) instead of just the function app; otherwise you will get TypeError: app.address is not a function.
Example:
index.js
const koa = require('koa');
const app = new koa();

module.exports = app.listen(3000);

index.spec.js
const request = require('supertest');
const app = require('./index.js');

describe('User Registration', () => {
    const agent = request.agent(app);

    it('should ...', () => {
        // ... (test body elided in the original)
    });
});
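The same idea with Express (a hedged sketch, not part of the original answer): export the http.Server that listen() returns, not the app function:
// app.js
const express = require('express');
const app = express();

app.get('/ping', (req, res) => res.sendStatus(200));

// chai-http/supertest need the http.Server, which listen() returns
module.exports = app.listen(3000);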
This may also help, and satisfies @dman's point about changing application code to fit a test.
Make your request to localhost and the port as needed:
chai.request('http://localhost:5000')
instead of
chai.request(server)
This fixed the same error message I had when using Koa JS (v2) and ava js.
The answers above correctly address the issue: supertest wants an http.Server to work on. However, calling app.listen() to get a server will also start a listening server, which is bad practice and unnecessary.
You can get around this by using http.createServer():
import * as http from 'http';
import * as supertest from 'supertest';
import * as test from 'tape';
import * as Koa from 'koa';

const app = new Koa();
// add some routes here
const apptest = supertest(http.createServer(app.callback()));

test('GET /healthcheck', (t) => {
    apptest.get('/healthcheck')
        .expect(200)
        .expect(res => {
            t.equal(res.text, 'Ok');
        })
        .end(t.end.bind(t));
});
Just in case someone uses Hapi.js: the issue still occurs there, because it does not use Express.js, and thus the address() function does not exist.
TypeError: app.address is not a function
at serverAddress (node_modules/chai-http/lib/request.js:282:18)
The workaround to make it work:
// this makes the server start up
let server = require('../../server')
// pass this instead of server to avoid the error
const API = 'http://localhost:3000'

describe('/GET token ', () => {
    it('JWT token', (done) => {
        chai.request(API)
            .get('/api/token?....')
            .end((err, res) => {
                res.should.have.status(200)
                res.body.should.be.a('object')
                res.body.should.have.property('token')
                done()
            })
    })
})
Export app at the end of the main API file, e.g. index.js:
module.exports = app;
We had the same issue when we ran mocha using ts-node in our Node + TypeScript serverless project.
Our tsconfig.json had "sourceMap": true, so the generated .js and .js.map files caused some odd transpilation issues (similar to this one) when we ran the mocha runner through ts-node. We set the sourceMap flag to false and deleted all .js and .js.map files in our src directory, and the issue was gone.
If you have already generated files in your src folder, the commands below will be really helpful:
find src -name "*.js.map" -exec rm {} \;
find src -name "*.js" -exec rm {} \;
I am using Jest and Supertest, but was receiving the same error. It was because my server takes time to set up (it is async in order to set up the db, read config, etc.). I needed to use Jest's beforeAll helper to allow the async setup to run. I also needed to refactor my server to separate listening, and instead use @Whyhankee's suggestion to create the test's server.
index.js
import express from "express";

export async function createServer() {
    // setup db, server, config, middleware
    return express();
}

async function startServer() {
    let app = await createServer();
    await app.listen({ port: 4000 });
    console.log("Server has started!");
}

if (process.env.NODE_ENV === "dev") startServer();
test.ts
import { createServer as createMyAppServer } from '#index';
import { test, expect, beforeAll } from '@jest/globals';
import * as http from 'http';
const supertest = require("supertest");

let request: any;

beforeAll(async () => {
    request = supertest(http.createServer(await createMyAppServer()));
});

test("fetch users", async (done: any) => {
    request
        .post("/graphql")
        .send({
            query: "{ getQueryFromGqlServer (id:1) { id} }",
        })
        .set("Accept", "application/json")
        .expect("Content-Type", /json/)
        .expect(200)
        .end(function (err: any, res: any) {
            if (err) return done(err);
            expect(res.body).toBeInstanceOf(Object);
            let serverErrors = JSON.parse(res.text)['errors'];
            expect(serverErrors.length).toEqual(0);
            expect(res.body.data.id).toEqual(1);
            done();
        });
});
Edit:
I also had errors when using data.forEach(async () => ...); I should have used for (let x of ...) in my tests.

Connect to MySQL using SSH Tunneling in node-mysql

When using the node-mysql npm package, is it possible to connect to the MySQL server using a SSH key instead of a password?
You can do the SSH tunnel component completely independently, and then point node-mysql (or any other SQL client) at your DB using TCP tunneled over SSH.
Just set up your SSH tunnel like this:
ssh -N -p 22 sqluser@remoteserverrunningmysql.your.net -L 33306:localhost:3306
Leave that running in the background (see articles like this for more in-depth info).
Then just point any MySQL client at port 33306 on localhost. It will connect as though you were on your remote server and using port 3306.
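With that tunnel running, a node-mysql connection would look something like this (a sketch; the credentials and database name are placeholders):
const mysql = require('mysql');

const connection = mysql.createConnection({
    host: '127.0.0.1',
    port: 33306,        // local end of the SSH tunnel
    user: 'sqluser',    // placeholder credentials
    password: 'secret',
    database: 'mydb'
});

connection.connect((err) => {
    if (err) throw err;
    console.log('Connected to MySQL through the SSH tunnel');
});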
Thanks so much Steve, your answer helped me a lot. Just to make it clearer, use:
ssh -f user@personal-server.com -L 2000:personal-server.com:25 -N
The -f tells ssh to go into the background just before it executes the command. This is followed by the username and server you are logging into. The -L 2000:personal-server.com:25 is in the form -L local-port:host:remote-port. Finally, the -N instructs OpenSSH not to execute a command on the remote system.
To connect to mongo, use whatever port you set as the local port (in this case, port 2000).
For example, let's say I want to connect to a remote server with IP 192.168.0.100 where mongo is running on port 27017. Assuming a user called elie with password eliepassword has SSH access on port 22, I would first run the following in the terminal:
ssh -f elie@192.168.0.100 -L 2002:127.0.0.1:27017 -N
In my mongo connection I will do:
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost:2002/mydatabase');
module.exports = mongoose.connection;
I hope this makes it clear.
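Another option is to create the tunnel programmatically from Node itself. The answer below relies on the ssh2 and mysql2 packages and a set of DB_* / DB_SSH_* environment variables, and passes the forwarded stream straight to mysql2: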
const mysql = require('mysql2');
const { Client } = require('ssh2');

const sshClient = new Client();

const dbServer = {
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    user: process.env.DB_USERNAME,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_DATABASE
};

const tunnelConfig = {
    host: process.env.DB_SSH_HOST,
    port: 22,
    username: process.env.DB_SSH_USER,
    password: process.env.DB_SSH_PASSWORD
};

const forwardConfig = {
    srcHost: '127.0.0.1',
    srcPort: 3306,
    dstHost: dbServer.host,
    dstPort: dbServer.port
};

const SSHConnection = new Promise((resolve, reject) => {
    sshClient.on('ready', () => {
        sshClient.forwardOut(
            forwardConfig.srcHost,
            forwardConfig.srcPort,
            forwardConfig.dstHost,
            forwardConfig.dstPort,
            (err, stream) => {
                if (err) reject(err);
                // pass the forwarded stream to mysql2 instead of a host/port pair
                const updatedDbServer = {
                    ...dbServer,
                    stream
                };
                const connection = mysql.createConnection(updatedDbServer);
                connection.connect((error) => {
                    if (error) {
                        reject(error);
                    }
                    resolve(connection);
                });
            });
    }).connect(tunnelConfig);
});
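Example usage of the promise above (a sketch): wait for the tunnel, run a query, then tear everything down.
SSHConnection.then((connection) => {
    connection.query('SELECT 1 + 1 AS two', (err, results) => {
        if (err) throw err;
        console.log(results[0].two); // 2
        connection.end(); // close the MySQL connection
        sshClient.end();  // and shut down the SSH tunnel
    });
}).catch(console.error);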
