I am using the following code to manage my server (install, reinstall, restart, shutdown, etc.):
/* SERVER - type */
switch (server.type) {
  /* INFO - server */
  case 1:
    /* MANAGE - ssh login */
    conn.on('ready', function() {
      /* MANAGE - info */
      conn.exec('uptime && cat /proc/cpuinfo', function(err, stream) {
        if (err) throw err;
        stream.on('close', function(code, signal) {
          /* MANAGE - info */
          console.log('server info closed...');
          conn.end();
        }).on('data', function(data) {
          console.log('server info...' + data);
        }).on('error', function(err) {
          console.log('server error...' + err);
          conn.end();
        });
      });
    }).connect({
      host: '192.168.1.1',
      port: 22,
      username: 'root',
      password: 'myserverpass'
    });
    break;
}
The problem is that when this executes I get the correct data from console.log, but after I get the data it continues to execute in a loop from the beginning. I need to get this only once:
console.log('server info...'+blablabla);
and i get this:
console.log('server info...'+blablabla);
console.log('server info...'+blablabla);
console.log('server info...'+blablabla);
console.log('server info...'+blablabla);
console.log('server info...'+blablabla);
So how can I exit from the switch when the command executes successfully, so that the command does not run in a loop?
I am using this to connect through SSH in Node:
https://github.com/mscdex/ssh2
I have previously used switch/case statements with no problem of them executing in a loop. I was thinking that ssh2 needs a return or a conn.end() event to exit from the loop?
Switch cases don't loop on their own, so your switch statement must be inside a loop that causes it to run more than once. As the code loops, any case that matches will be triggered on each iteration.
Your best bet is to put your connection into a function and call it inside your loop only if the connection has not already been made. This way you have one simple script that handles connections, and all you have to do is pass the connection params to the function.
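To make that concrete, here is a minimal sketch of the guard idea (connectOnce and startConnection are hypothetical names, not ssh2 API, and the for loop stands in for whatever loop surrounds your switch):

```javascript
// Wrap the connection logic in a function and track whether it already ran,
// so the surrounding loop can hit the switch case many times but connect once.
let connected = false;

function connectOnce(startConnection) {
  if (connected) return; // already connected: do nothing on later iterations
  connected = true;
  startConnection(); // e.g. build the ssh2 Client and call .connect() here
}

// simulate the loop that keeps re-entering the switch case
let attempts = 0;
for (let i = 0; i < 5; i++) {
  connectOnce(function () { attempts += 1; });
}
console.log('connections made: ' + attempts); // 1
```

The same flag can be reset in the stream's 'close' handler if you want to allow reconnecting after the previous session has fully ended.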
I've been using PSQL for over 2 years now; this issue started occurring 3 months ago.
The database would stop responding after a day of runtime until the affected Node.js process was restarted.
4 days ago this issue got much worse: unless the host OS is restarted, the database stops responding within minutes (or less) of process runtime.
This issue occurs in only one Node.js process; I have about 4 other Node.js processes running perfectly fine, so it's most likely an issue with my code.
Highest statistics for the affected process:
10 Sessions (constantly stays at that number)
90000 Transactions Per Second (Transactions)
140 Tuples in (Updates)
8000000 Tuples out (Returned)
180000 Block I/O (Hits)
I have tried:
Re-starting Postgres
Re-installing Postgres
using pg-pool (Runs into error: Connection timed out)
using pg-promise (I'm not sure how to apply this module without spamming tasks or connections)
No errors are emitted, and the connection becomes increasingly slow over several minutes until the pgAdmin dashboard basically flatlines and no further response is received.
Code:
Pool creation (initiated on startup):
const { Pool } = require('pg');
const auth = require('./auth.json');
const ch = require('./ClientHelper');
const pool = new Pool({
user: 'postgres',
host: 'localhost',
database: 'Ayako-v1.5',
password: auth.pSQLpw,
port: 5432,
});
pool.query('SELECT NOW() as now;', (err) => {
if (err) {
ch.logger("| Couldn't connect to DataBase", err.stack);
} else {
console.log('| Established Connection to DataBase');
}
});
pool.connect((err) => {
if (err) {
ch.logger('Error while logging into DataBase', err.stack);
}
});
pool.on('error', (err) => {
ch.logger('Unexpected error on idle pool client', err);
});
module.exports = pool;
Queries are executed via:
const query = async (query, arr, debug) => {
  const pool = require('./DataBase');
  if (debug === true) console.log(query, arr);
  return pool.query(query, arr).catch((err) => {
    console.log(query, arr);
    module.exports.logger('Pool Query Error', err);
    return null;
  });
};
Queries arrive at the above query function but never receive a response.
File Links:
https://github.com/Larsundso/Ayako-v1.5/blob/main/Files/BaseClient/DataBase.js
https://github.com/Larsundso/Ayako-v1.5/blob/f2110f3cd73325b35a617fe58d19d8d9c46659d9/Files/BaseClient/ClientHelper.js#L215
Versions
PSQL - v14 |
Node.js - v17.8.0 |
Linux - Ubuntu 20.04.4 LTS
I appreciate everyone's help here; logging the executed queries put me on the right track.
The issue was that the number of requests handled exceeded PostgreSQL's capabilities, making the queries stack up until they timed out.
My solution is to couple Redis with pSQL to avoid unnecessarily accessing the database.
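The cache-aside idea can be sketched without Redis itself; below, a plain Map stands in for the Redis client and fetchFromDb for a pool.query call (both are hypothetical stand-ins, not part of the pg or redis APIs):

```javascript
// Cache-aside sketch: check the cache first, only hit the "database" on a miss.
const cache = new Map();

let dbHits = 0;
function fetchFromDb(key) {
  dbHits += 1; // pretend this is an expensive pSQL query
  return { key, value: 'row for ' + key };
}

function getCached(key) {
  if (cache.has(key)) return cache.get(key); // served from cache, no DB access
  const row = fetchFromDb(key);
  cache.set(key, row); // with Redis you would also attach a TTL here
  return row;
}

getCached('user:1');
getCached('user:1');
getCached('user:1');
console.log('database hits: ' + dbHits); // 1
```

With real Redis the Map lookups become GET/SET calls with an expiry, so hot rows are answered without ever reaching PostgreSQL.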
I'm currently learning how to set up a Node server, and I'm making an API that performs some requests on my MariaDB database hosted on my VPS.
The problem is that when I make a POST request which runs a SQL query against the database, the connection times out and the server shuts down.
I have tried adding new users to MariaDB with all privileges, and I tried using Sequelize too.
But none of those solutions work; it still times out every time I make a query to my database.
I can connect to phpMyAdmin and run some queries there, so I think my database is running fine.
Here is my code:
router.post('/login', async function(req,res) {
let conn;
try {
// establish a connection to MariaDB
conn = await pool.getConnection();
// create a new query
var query = "select * from people";
// execute the query and set the result to a new variable
var rows = await conn.query(query);
// return the results
res.send(rows);
} catch (err) {
throw err;
} finally {
if (conn) return conn.release();
}
})
This is the way I connect to my database in my database.js file:
const pool = mariadb.createPool({
host: process.env.DATABASE_HOST,
user: process.env.DATABASE_USER,
password: process.env.DATABASE_PASSWORD,
database: process.env.DATABSE_NAME,
});
// Connect and check for errors
module.exports={
getConnection: function(){
return new Promise(function(resolve,reject){
pool.getConnection().then(function(connection){
resolve(connection);
}).catch(function(error){
reject(error);
});
});
}
}
module.exports = pool;
And my error:
Node.js v17.0.1
[nodemon] app crashed - waiting for file changes before starting...
[nodemon] restarting due to changes...
[nodemon] starting `node server.js`
Server started
/Users/alexlbr/WebstormProjects/AlloEirb/server/node_modules/mariadb/lib/misc/errors.js:61
return new SqlError(msg, sql, fatal, info, sqlState, errno, additionalStack, addHeader);
^
SqlError: retrieve connection from pool timeout after 10001ms
at Object.module.exports.createError (/Users/alexlbr/WebstormProjects/AlloEirb/server/node_modules/mariadb/lib/misc/errors.js:61:10)
at timeoutTask (/Users/alexlbr/WebstormProjects/AlloEirb/server/node_modules/mariadb/lib/pool-base.js:319:16)
at Timeout.rejectAndResetTimeout [as _onTimeout] (/Users/alexlbr/WebstormProjects/AlloEirb/server/node_modules/mariadb/lib/pool-base.js:342:5)
at listOnTimeout (node:internal/timers:559:11)
at processTimers (node:internal/timers:500:7) {
text: 'retrieve connection from pool timeout after 10001ms',
Three possibilities come to mind:
There is a typo in the database name:
database: process.env.DATABSE_NAME
should be:
database: process.env.DATABASE_NAME
Your environment variables may not be set properly. Are you using dotenv to load them from an .env file by calling require('dotenv').config() at startup?
https://www.npmjs.com/package/dotenv
If not, how are you setting the process.env values at runtime?
If the environment values are indeed set:
verify that these environment values are correct
verify which interface your MariaDB server is listening on:
It's possible the server is using a bind-address configuration and only listening on 127.0.0.1 (which is the default on Debian/Ubuntu)
You want to make sure the server is listening on 0.0.0.0 (all interfaces, not only localhost).
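For illustration, the relevant setting lives in the MariaDB server config (the file path varies by install; on Debian/Ubuntu it is often /etc/mysql/mariadb.conf.d/50-server.cnf, and this is only a fragment, not a full config):

```ini
# MariaDB server config fragment - listen on all interfaces instead of
# only 127.0.0.1; restart the server after changing this
[mysqld]
bind-address = 0.0.0.0
```

Note that exposing the server on all interfaces should be paired with firewall rules or restricted DB user host grants.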
I'm running Protractor on my Windows VM and need to execute some commands on a Linux VM, so I'm trying to use SSH. I've tried using 'simple-ssh', 'remote-exec' and 'ssh-exec'. The problem with all of them is the same: the Protractor test completes without any error, but the SSH connection is never established. Strangely, no error is thrown either; I've even tried giving a wrong IP, and still no error is thrown. I've tried SSH over Python from the same machine, and it works flawlessly.
Here is a piece of code from the documentation that I tried to use directly:
var ssh = new SSH({
host: 'xx.xx.xxx.xx',
user: 'xxxxx',
pass: 'xxxxx'
});
ssh.exec('ls -lh', {
out: function(stdout) {
console.log(stdout);
}
}).start();
Figured it out.
I used the ssh2 package to establish an interactive SSH session, then synchronized it with Jasmine using done() in Jasmine 2.
I used Maciej Ciach's solution for solving the sync problem.
Here's an 'it' block that runs flawlessly:
it("trying ssh connection", function (done) {
  var conn = new Client();
  conn.on('ready', function () {
    console.log('Client :: ready');
    conn.shell(function (err, stream) {
      if (err) throw err;
      stream.on('close', function () {
        console.log('Stream :: close');
        conn.end();
        done(); // resolve the spec only after the SSH session has closed
      }).on('data', function (data) {
        console.log('OUTPUT: ' + data);
      });
      stream.end('ls \nexit\n');
    });
  }).connect({
    host: 'xx.xx.xxx.xx',
    port: 22,
    username: 'x',
    privateKey: require('fs').readFileSync('file_path')
  });
})
Obviously, you need to add your public SSH key to the list of trusted keys on your server first.
You can read about it here.
If you are on Windows, execute those commands in PowerShell.
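For reference, a sketch of generating a key pair and printing the public key that goes into the server's ~/.ssh/authorized_keys (the file names here are illustrative; in practice you would keep the key under ~/.ssh, protect it with a passphrase, and let ssh-copy-id do the copying):

```shell
# generate an ed25519 key pair non-interactively into the current directory
ssh-keygen -t ed25519 -N '' -f ./demo_key

# this is the line to append to ~/.ssh/authorized_keys on the server
cat ./demo_key.pub
```

The private half (./demo_key) is what you then pass to ssh2's privateKey option via readFileSync.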
I have the following Node.js code to SSH into a server, and it works great forwarding stdout, but whenever I type anything it is not forwarded to the server. How do I forward my local stdin to the SSH connection's stdin?
var command = 'ssh -tt -i ' + keyPath + ' -o StrictHostKeyChecking=no ubuntu@' + hostIp;
var ssh = child_proc.exec(command, {
env: process.env
});
ssh.stdout.on('data', function (data) {
console.log(data.toString());
});
ssh.stderr.on('data', function (data) {
console.error(data.toString());
});
ssh.on('exit', function (code) {
process.exit(code);
});
There are two ways to go about this if you want to pipe process.stdin to the child process:
Child processes have a stdin property that represents the stdin of the child process, so all you should need to do is add process.stdin.pipe(ssh.stdin).
You can specify a custom stdio when spawning the process to tell it what to use for the child process's stdin (note that this requires child_proc.spawn(); exec() always sets up its own pipes). With sshArgs as the argument array built from your command:
child_proc.spawn('ssh', sshArgs, { env: process.env, stdio: [process.stdin, 'pipe', 'pipe'] })
Also, on a semi-related note, if you want to avoid spawning child processes and have more programmatic control over and/or have more lightweight ssh/sftp connections, there is the ssh2 module.
I am running my node.js server with forever, and my script gets killed after 1-2 days with this error in the log file:
error: Forever detected script was killed by signal: SIGSEGV
Now I have many functions in my node.js script. After adding a console.log at the beginning of each function, I ended up with this in the log:
info: transport end (undefined)
debug: set close timeout for client CbU1mvlYaIvDWHB4ChQa
debug: cleared close timeout for client CbU1mvlYaIvDWHB4ChQa
disconnection function
debug: discarding transport
debug: clearing poll timeout
debug: client authorized
info: handshake authorized 2O3m1B3dGWFOJ4W9ChQc
error: Forever detected script was killed by signal: SIGSEGV
The log makes it seem as if either the connect or the disconnect function has a problem, but since the script seg faults only after 2 days of running and over 10000 connections/disconnections, I think that might not really be the problem.
Here are my connection and disconnection functions. I also connect to my PostgreSQL database via node-dbi:
var DBWrapper = require('node-dbi').DBWrapper;
var DBExpr = require('node-dbi').DBExpr;
var dbConnectionConfig = { host: 'localhost', user: 'user', password: 'pass', database: 'dbname' };
dbWrapper = new DBWrapper( "pg", dbConnectionConfig );
dbWrapper.connect();
io.sockets.on('connection', function(socket) {
console.log("socket connection");
socket.on('set username', function(userName) {
var milliseconds = (new Date).getTime();
var data = { socketid: socket.id, time: milliseconds };
dbWrapper.insert('all_sockets', data , function(err) {
});
});
socket.on('disconnect', function() {
console.log("disconnection function");
dbWrapper.remove('all_sockets', [['socketid=?', socket.id]] , function(err) {} );
});
});
Where could the segmentation fault be coming from?
I would recommend installing a segfault handler so that a native stack trace is written to STDERR when the crash happens. This way you will have some more useful debug info. For example, the segfault-handler npm module registers a handler with SegfaultHandler.registerHandler('crash.log') and logs a native backtrace on SIGSEGV.
You can find one here