PSQL Database stops responding after a while - javascript

I've been using PostgreSQL for over two years; this issue started occurring three months ago.
The database would stop responding after about a day of runtime until the affected Node.js process was restarted.
Four days ago the issue got much worse: unless the host OS is restarted, the database stops responding within minutes (or less) of process runtime.
It occurs in only one Node.js process. I have about four other Node.js processes running perfectly fine, so it's most likely an issue with my code.
Highest statistics for the affected process:
Sessions: 10 (constantly stays at that number)
Transactions per second: 90,000
Tuples in (updates): 140
Tuples out (returned): 8,000,000
Block I/O (hits): 180,000
I have tried:
Restarting Postgres
Reinstalling Postgres
Using pg-pool (runs into "Connection timed out" errors)
Using pg-promise (I wasn't sure how to apply this module without spamming tasks or connections; see the sketch below)
No errors are emitted; the connection just becomes increasingly slow over several minutes until the pgAdmin dashboard basically flatlines and no further response is received.
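For reference, pg-promise is normally driven through a single shared database object created once at startup, which reuses one underlying pool rather than spawning extra tasks or connections per query. A minimal sketch (assumed setup, not code from this project; the password source is hypothetical):
const pgp = require('pg-promise')();
// One shared db object for the whole process; pg-promise pools connections
// behind it, so requiring this module everywhere does not multiply connections.
const db = pgp({
  host: 'localhost',
  port: 5432,
  database: 'Ayako-v1.5',
  user: 'postgres',
  password: process.env.PSQL_PW, // assumption: password via environment
});
module.exports = db;
// elsewhere: const db = require('./db'); const rows = await db.any('SELECT NOW();');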
Code:
Pool creation (initiated on startup):
const { Pool } = require('pg');
const auth = require('./auth.json');
const ch = require('./ClientHelper');

const pool = new Pool({
  user: 'postgres',
  host: 'localhost',
  database: 'Ayako-v1.5',
  password: auth.pSQLpw,
  port: 5432,
});

// Smoke-test query on startup.
pool.query('SELECT NOW() as now;', (err) => {
  if (err) {
    ch.logger("| Couldn't connect to DataBase", err.stack);
  } else {
    console.log('| Established Connection to DataBase');
  }
});

// NB: this checks a client out of the pool and never releases it.
pool.connect((err) => {
  if (err) {
    ch.logger('Error while logging into DataBase', err.stack);
  }
});

pool.on('error', (err) => {
  ch.logger('Unexpected error on idle pool client', err);
});

module.exports = pool;
Queries are executed via:
const query = async (query, arr, debug) => {
  const pool = require('./DataBase');
  if (debug === true) console.log(query, arr);
  return pool.query(query, arr).catch((err) => {
    console.log(query, arr);
    module.exports.logger('Pool Query Error', err);
    return null;
  });
};
Queries arrive at the above query function but never receive a response.
File Links:
https://github.com/Larsundso/Ayako-v1.5/blob/main/Files/BaseClient/DataBase.js
https://github.com/Larsundso/Ayako-v1.5/blob/f2110f3cd73325b35a617fe58d19d8d9c46659d9/Files/BaseClient/ClientHelper.js#L215
Versions
PostgreSQL: v14
Node.js: v17.8.0
OS: Ubuntu 20.04.4 LTS

I appreciate everyone's help here; logging the executed queries put me on the right track.
The issue was that the number of requests exceeded what PostgreSQL could handle, so queries stacked up until they timed out.
My solution is to couple Redis with PostgreSQL to avoid hitting the database unnecessarily.
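A minimal sketch of that read-through caching pattern (assuming node-redis v4; the table and key names are hypothetical, and the pool is the module exported above):
const { createClient } = require('redis'); // assumption: node-redis v4+
const pool = require('./DataBase');

const cache = createClient();
cache.connect().catch(console.error);

// Read-through cache: serve repeated lookups from Redis and only hit
// PostgreSQL on a miss, storing the row with a short TTL.
const getGuildSettings = async (guildId) => {
  const key = `cache:guildsettings:${guildId}`; // hypothetical key scheme
  const cached = await cache.get(key);
  if (cached) return JSON.parse(cached);

  const res = await pool.query(
    'SELECT * FROM guildsettings WHERE guildid = $1;', // hypothetical table
    [guildId],
  );
  const row = res && res.rows[0] ? res.rows[0] : null;
  if (row) await cache.set(key, JSON.stringify(row), { EX: 60 }); // 60 s TTL
  return row;
};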

Related

Why am I getting a timeout error when exporting a mariadb connection pool in Node.js?

EDIT
I found the error, and the mistake was very obvious: I did not include
require("dotenv").config(); in the connection.js file. Without it, the database connection simply fails after a timeout because it has no connection details.
I found an update log from the MariaDB Node.js connector team stating that in a few cases the connector does not provide a sufficient error message (sometimes only a "timeout" with no further information), so I changed what I was looking for and found the mistake.
For anyone getting a similar error message, this can mean anything, so check all parts of your code!
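A minimal sketch of the fix, at the top of connection.js:
// Load .env before anything reads process.env; without this, every
// MARIADB_* value below is undefined and the pool silently waits
// for a connection until it times out.
require("dotenv").config();
const mariadb = require("mariadb");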
Original Post
I am trying to get familiar with Node.js and Express, but I ran into an issue that I can't seem to solve:
When creating a MariaDB database pool in a separate file and exporting the pool using module.exports, I have trouble using the same pool in another file; I get a timeout error when trying to use the pool to query the database.
If I use the exact same code in a single file instead of two separate files, the query works perfectly, so I think something is going wrong during module.exports = pool.
Am I missing something? Thanks in advance!
I have two files:
index.js:
// import the Express web framework
const express = require("express");
// create an Express application
const app = express();
const pool = require('./database/connection');
const cors = require('cors');

// middleware
app.use(cors());
app.use(express.json());

const getData = async () => {
  const data = await pool.query("call stored_procedure");
  console.log(data);
};
getData();

app.listen(3001, () => {
  console.log('Server running on port 3001');
});
and connection.js:
// import the mariadb library
const mariadb = require("mariadb");

// function that creates a mariadb connection pool for the database
const createPool = () => {
  try {
    return mariadb.createPool({
      connectionLimit: 10,
      host: process.env.MARIADB_HOST,
      user: process.env.MARIADB_USER,
      password: process.env.MARIADB_PASSWORD,
      database: process.env.MARIADB_DB,
      port: 3306,
    });
  } catch (err) {
    console.error('Failed to connect to database: ');
    console.error(err);
  }
};

const pool = createPool();

// export the database connection pool
module.exports = pool;
Running this app results in the following error (after some time):
path_to_dir/node_modules/mariadb/lib/misc/errors.js:57
return new SqlError(msg, sql, fatal, info, sqlState, errno, additionalStack, addHeader);
^
SqlError: (conn=-1, no: 45028, SQLState: HY000) retrieve connection from pool timeout after 10001ms
(pool connections: active=0 idle=0 limit=10)
at Object.module.exports.createError (path_to_dir/node_modules/mariadb/lib/misc/errors.js:57:10)
at Pool._requestTimeoutHandler (path_to_dir/node_modules/mariadb/lib/pool.js:345:26)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7) {
text: 'retrieve connection from pool timeout after 10001ms\n' +
' (pool connections: active=0 idle=0 limit=10)',
sql: null,
fatal: false,
errno: 45028,
sqlState: 'HY000',
code: 'ER_GET_CONNECTION_TIMEOUT'
}

Unable to make request to my mariadb database using node server

I'm currently learning how to set up a Node server, and I'm making an API that performs some requests on my MariaDB database hosted on my VPS.
The problem is that when I make a POST request that runs a SQL query against the database, the connection times out and the server shuts down.
I have tried adding new users to MariaDB with all privileges, and I tried Sequelize too.
But none of those solutions work; it still times out every time I query the database.
I can connect to phpMyAdmin and run requests from there, so I think the database itself is running fine.
Here is my code:
router.post('/login', async function(req, res) {
  let conn;
  try {
    // establish a connection to MariaDB
    conn = await pool.getConnection();
    // create a new query
    var query = "select * from people";
    // execute the query and set the result to a new variable
    var rows = await conn.query(query);
    // return the results
    res.send(rows);
  } catch (err) {
    throw err;
  } finally {
    if (conn) return conn.release();
  }
});
The way I connect to my database in my database.js file:
const pool = mariadb.createPool({
  host: process.env.DATABASE_HOST,
  user: process.env.DATABASE_USER,
  password: process.env.DATABASE_PASSWORD,
  database: process.env.DATABSE_NAME,
});

// Connect and check for errors
module.exports = {
  getConnection: function() {
    return new Promise(function(resolve, reject) {
      pool.getConnection().then(function(connection) {
        resolve(connection);
      }).catch(function(error) {
        reject(error);
      });
    });
  }
};

// NB: this second assignment replaces the object exported above.
module.exports = pool;
And my error:
Node.js v17.0.1
[nodemon] app crashed - waiting for file changes before starting...
[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Server started
/Users/alexlbr/WebstormProjects/AlloEirb/server/node_modules/mariadb/lib/misc/errors.js:61
return new SqlError(msg, sql, fatal, info, sqlState, errno, additionalStack, addHeader);
^
SqlError: retrieve connection from pool timeout after 10001ms
at Object.module.exports.createError (/Users/alexlbr/WebstormProjects/AlloEirb/server/node_modules/mariadb/lib/misc/errors.js:61:10)
at timeoutTask (/Users/alexlbr/WebstormProjects/AlloEirb/server/node_modules/mariadb/lib/pool-base.js:319:16)
at Timeout.rejectAndResetTimeout [as _onTimeout] (/Users/alexlbr/WebstormProjects/AlloEirb/server/node_modules/mariadb/lib/pool-base.js:342:5)
at listOnTimeout (node:internal/timers:559:11)
at processTimers (node:internal/timers:500:7) {
text: 'retrieve connection from pool timeout after 10001ms',
Three possibilities come to mind:
1. There is a typo in the database name:
database: process.env.DATABSE_NAME
should be:
database: process.env.DATABASE_NAME
2. Your environment variables are not being set properly. Are you using dotenv (https://www.npmjs.com/package/dotenv) to load them from an .env file? If not, how are you setting the process.env values at runtime?
3. If the environment values are indeed set:
verify that those values are correct
verify which interface your MariaDB server is listening on: it may have a bind-address configured and only be listening on 127.0.0.1 (the default on Debian/Ubuntu), whereas you want it listening on 0.0.0.0 (all interfaces, not only localhost)
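A quick way to rule out the first two possibilities is to fail fast at startup instead of waiting ten seconds for the pool timeout. A minimal sketch (the variable names mirror the corrected pool config above):
require("dotenv").config();

// Throw immediately if a connection detail is missing, rather than letting
// pool.getConnection() time out later with a generic error.
for (const name of ["DATABASE_HOST", "DATABASE_USER", "DATABASE_PASSWORD", "DATABASE_NAME"]) {
  if (!process.env[name]) {
    throw new Error(`Missing environment variable: ${name}`);
  }
}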

Redis connection is lost after multiple calls to function

The program I am writing is a status display screen for alarms, each of which is represented by a channel.
When the server is started (it runs on a Vagrant virtual machine), an InfluxDB database is accessed, and the data (comprising 1574 "channels") is processed and put into a Redis database. This runs fine, and the GUI displays with no issues when the webpage is refreshed, although it takes a long time to load (up to 20 s), and nearly all of that time is spent in the method below.
However, after a few refreshes or moving around the site, it often crashes with the following error:
{ AbortError: Redis connection lost and command aborted. It might
have been processed.
at RedisClient.flush_and_error (/vagrant/node_modules/redis/index.js:362:23)
at RedisClient.connection_gone (/vagrant/node_modules/redis/index.js:664:14)
at RedisClient.on_error (/vagrant/node_modules/redis/index.js:410:10)
at Socket.<anonymous> (/vagrant/node_modules/redis/index.js:279:14)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at onwriteError (_stream_writable.js:417:12)
at onwrite (_stream_writable.js:439:5)
at _destroy (internal/streams/destroy.js:39:7)
at Socket._destroy (net.js:568:3) code: 'UNCERTAIN_STATE', command: 'HGETALL', args: [
'vista:hash:Result:44f59707-c873-11e8-93b9-7f551d0bdd1f' ], origin:
{ Error: Redis connection to 127.0.0.1:6379 failed - write EPIPE
at WriteWrap.afterWrite (net.js:868:14) errno: 'EPIPE', code: 'EPIPE', syscall: 'write' } }
This error is displayed 1574 times (once for each channel), and occurs when the program reaches this function:
Result.getFormattedResults = async function (cycle) {
  const channels = await Channel.findAndLoad()
  const formattedResults = await mapAsyncParallel(channels, async channel => {
    const result = await this.findAndLoadByChannel(channel, cycle)
    const formattedResult = await result.format(channel)
    return formattedResult
  })
  return formattedResults
}
mapAsyncParallel() is as follows:
export const mapAsyncParallel = (arr, fn, thisArg) => {
  return Promise.all(arr.map(fn, thisArg))
}
findAndLoadByChannel() finds the channel and loads it with this line:
const resultModel = await this.load(resultId)
And format() takes the model and outputs the data in JSON format.
There are two fetch(...) calls in the front end (both are needed and cannot be combined), and the problem rarely occurs when I comment out either one of them. This makes me think it could be a max-memory or max-connections problem (increasing maxmemory in the Redis config file didn't help), or a problem with using so many promises (a concept I am fairly new to); see the sketch below.
This has only started to occur as I added more functionality. I assume the function needs optimizing, but I took this project over from someone else and am still quite new to Node.js and Redis.
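For what it's worth, one way to test the too-many-parallel-commands theory (an assumption on my part, not code from this project) is to bound the fan-out instead of mapping all 1574 channels at once. A minimal sketch in the same style as mapAsyncParallel:
// Variant of mapAsyncParallel that runs at most `limit` calls at a time,
// so a burst of 1574 channels doesn't flood a single Redis connection.
export const mapAsyncLimited = async (arr, fn, limit = 50) => {
  const results = new Array(arr.length)
  let next = 0
  const worker = async () => {
    while (next < arr.length) {
      const i = next++ // safe: increments synchronously between awaits
      results[i] = await fn(arr[i], i, arr)
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, arr.length) }, worker))
  return results
}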
Versions:
Vagrant: 2.0.1
Ubuntu: 16.04.5
Redis: 4.0.9
Node: 8.12.0
npm: 5.7.1
I've now moved all the getting of the data (from Redis) to the server-side channels.controller file.
So, where before I would have:
renderPage: async (req, res) => {
  res.render('page')
},
I now have a method like:
renderPage: async (req, res) => {
  const data1 = getData1()
  const data2 = getData2()
  res.render('page', {data1, data2})
},
(Don't worry, these aren't my actual variable names)
The two data variables were previously retrieved using fetch.
I export the data once it's loaded into Redis and import it in the controller file, where getters combine it all into one return array.
The pages now take milliseconds to refresh, and I haven't had any crashes.

Helenus isn't able to connect

I am developing a Node.js application that talks to Cassandra using the helenus driver, version 0.6.10.
This is app.js:
var helenus = require('helenus');

var pool = new helenus.ConnectionPool({
  hosts: ['localhost:9160'],
  keyspace: 'test_dev',
  user: '',
  password: ''
});

pool.connect(function(err, keyspace) {
  if (err) throw err;
  console.log('Listening on port 3000....');
});

pool.on('error', function(err) {
  if (err) throw err;
});
When we call pool.connect, the callback throws the following error:
error name: "HelenusNoAvailableNodesException"
error message: "Could Not Connect To Any Nodes"
While troubleshooting I found that the onDescribe call inside Connection.prototype.use throws a "NotFoundException".
What am I doing wrong? Any help is appreciated.
First check your Cassandra version. If you are running Cassandra 1.2 or greater, you should really be using the DataStax Node.js driver. There isn't any reason to use Thrift on 1.2 or greater, as the performance and features of CQL greatly outweigh Thrift; also, while the Thrift server is still available, no development effort goes into it.
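For comparison, a minimal connection sketch with the DataStax driver (the cassandra-driver package speaks CQL on port 9042 rather than Thrift on 9160; localDataCenter is required by newer driver versions, so adjust it to your cluster):
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['localhost'],
  localDataCenter: 'datacenter1', // assumption: default single-DC name
  keyspace: 'test_dev',
});

client.connect()
  .then(() => console.log('Connected over CQL'))
  .catch((err) => console.error('Connection failed', err));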
If you are absolutely sure you need Thrift, then first ensure the keyspace exists. Helenus requires a keyspace to connect to, so if the keyspace is not present it will not be able to connect to any nodes. If the keyspace exists, run:
nodetool statusthrift
If it says anything other than running, run nodetool enablethrift and try again.
If Thrift is running, then check the interface configured in your cassandra.yaml. The rpc_address should match the interface you are connecting to from the client; if you are unsure of the interface, just set it to 0.0.0.0. The rpc_port should be 9160. After changing any settings in cassandra.yaml, you will need to restart the Cassandra service on each node in the cluster.

node.js socket.io script getting killed by SIGSEGV after 1-2 days

I am running my node.js server via forever, and my script gets killed after 1-2 days with this error in the log file:
error: Forever detected script was killed by signal: SIGSEGV
My script has many functions; after adding a console.log at the beginning of each one, I ended up with this in the log:
info: transport end (undefined)
debug: set close timeout for client CbU1mvlYaIvDWHB4ChQa
debug: cleared close timeout for client CbU1mvlYaIvDWHB4ChQa
disconnection function
debug: discarding transport
debug: clearing poll timeout
debug: client authorized
info: handshake authorized 2O3m1B3dGWFOJ4W9ChQc
error: Forever detected script was killed by signal: SIGSEGV
The log makes it seem as if either the connect or the disconnect function has a problem, but since the script only seg-faults after about two days of running and over 10,000 connections/disconnections, I suspect that is not really the problem.
Here are my connection and disconnection functions. I also connect to my PostgreSQL database via node-dbi:
var DBWrapper = require('node-dbi').DBWrapper;
var DBExpr = require('node-dbi').DBExpr;

var dbConnectionConfig = { host: 'localhost', user: 'user', password: 'pass', database: 'dbname' };
dbWrapper = new DBWrapper("pg", dbConnectionConfig);
dbWrapper.connect();

io.sockets.on('connection', function(socket) {
  console.log("socket connection");

  socket.on('set username', function(userName) {
    var milliseconds = (new Date).getTime();
    var data = { socketid: socket.id, time: milliseconds };
    dbWrapper.insert('all_sockets', data, function(err) {
    });
  });

  socket.on('disconnect', function() {
    console.log("disconnection function");
    dbWrapper.remove('all_sockets', [['socketid=?', socket.id]], function(err) {});
  });
});
Where could the segfault be coming from?
I would recommend registering a segfault handler so that a native stack trace is written to STDERR when the crash happens; that way you will have some more useful debug info.
You can find one here.
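Usage is minimal; a sketch assuming the segfault-handler npm package (an assumption, but one common choice):
const SegfaultHandler = require('segfault-handler');

// Writes a native stack trace to crash.log (and STDERR) when the process
// receives SIGSEGV, instead of dying silently under forever.
SegfaultHandler.registerHandler('crash.log');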
