I am attempting to write an FTP server in Node.js using the ftp-srv package, but I am having some issues. I have tried both the ls and ep values for the file_format option, but I keep receiving the error "Bad file stat formatter". I am not sure if the issue is that I am running on a Windows machine and the library only supports Unix systems.
This is the code that initialises my server.
const server = new FtpSrv({
  pasv_url: resolverFunction,
  url: 'ftp://0.0.0.0:21',
  log: logger,
  file_format: 'ls',
  anonymous: false,
});
These are the docs for ftp-srv for the file_format value:
https://github.com/autovance/ftp-srv#file_format
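The linked docs also mention that file_format can be a custom formatting function instead of the 'ls'/'ep' strings, which may sidestep the formatter lookup entirely. A minimal sketch of that route (the shape of the stat object passed in is an assumption based on fs.Stats plus a name field, not verified against the library):

const server = new FtpSrv({
  pasv_url: resolverFunction,
  url: 'ftp://0.0.0.0:21',
  log: logger,
  // Hypothetical custom formatter: return one LIST line per entry.
  file_format: (fileStat) => `${fileStat.name} ${fileStat.size}`,
  anonymous: false,
});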
Once I migrated to Docker to have a virtual network simulating an actual network (bridge type with DNS, which works: the FQDN is resolved correctly to the referring IP), the following errors appeared in the console log AND no data is displayed on the frontend website.
ERROR Error: NG0901
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend:4000/crafts. (Reason: CORS request did not succeed). Status code: (null).
ERROR
Object { headers: {…}, status: 0, statusText: "Unknown Error", url: "http://backend:4000/crafts", ok: false, name: "HttpErrorResponse", message: "Http failure response for http://backend:4000/crafts: 0 Unknown Error", error: error }
That's the browser's (Firefox) console log.
I think nginx is doing something to the headers, and/or the body is empty due to server-side nginx configuration.
On localhost everything worked fine.
So I'm working on the nginx config, but so far without any success. I read about similar problems but couldn't find a solution myself, OR the answers I read didn't work with my setup.
I tried changing the IP to 0.0.0.0 to make it accessible on the network.
Oh, AND I'm using Node.js with Express:
app.listen(port,ip)
I use a Dockerfile and a docker-compose.yml to build the images, and a PowerShell script to compose them.
What I suspect to cause the problem is the backend:
index.js is run and looks like this:
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const express_1 = __importDefault(require("express"));
const Routes_1 = __importDefault(require("./Routes"));
const app = (0, express_1.default)();
app.use(function (req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "*");
res.header("Access-Control-Allow-Methods", "PUT,POST,GET,DELETE,OPTIONS");
next();
});
// middleswares
app.use(express_1.default.json());
app.use(express_1.default.urlencoded({ extended: false })); //changed to see wheater it would effect the package isssue- should allow
app.use(Routes_1.default);
app.listen(4000,'0.0.0.0'); // or fqdn 'frontend'
console.log('server on port', 4000);
This is generated from index.ts by a build command.
The corresponding Dockerfile:
FROM node:alpine as builder
WORKDIR /app/
COPY . /app/
COPY package.json /app/
COPY package-lock.json /app/
RUN cd /app/
RUN npm install -g
RUN npm update express
RUN npm install pg
FROM nginx:alpine
COPY --from=builder ./app/dist ./usr/share/nginx/html/
EXPOSE 3999-6001
CMD ["nginx", "-g", "daemon off;"]
RUN apk add --update nodejs
RUN apk add --update npm
After the image runs, I open the terminal and run the following in the /usr/share/nginx/html directory:
npm i express
npm i pg
node index.js
Then I install vim and edit the nginx config like this:
vi /etc/nginx/nginx.conf
I add a server block, make it listen on the FQDN 'frontend' (or its referring IP) and port 4000, with listen ip:port style syntax; a sketch follows below.
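A minimal sketch of such a server block (reconstructed from the description above, not the exact file; hostname, port, and paths are this setup's own):

server {
    listen frontend:4000;                  # or the container's IP, e.g. 172.18.0.2:4000
    access_log /var/log/nginx/access.log;  # logs added as mentioned below
    error_log /var/log/nginx/error.log;
    location / {
        root /usr/share/nginx/html;        # where the built app was copied
    }
}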
I added error and access logs earlier on, and they don't show problems, apart from sometimes saying that an IP is not available; I'm lacking the understanding of how to interpret that.
PostgreSQL is also running in a Docker container, on the default port 5432 and with the FQDN 'database', which is also properly resolvable, same as the backend's FQDN.
There is so much more stuff linking the short pieces of code I have; feel free to request more if interested, or if you think it'd be required to find out what's going wrong.
I learnt my lesson:
servers listen on their own IPs, or on their localhost.
So I had a misconception there. Thanks, though, to the people taking a look in here.
Also, a Node.js/Express server doesn't necessarily need nginx to run on; node is enough for this purpose.
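A minimal sketch of that setup, serving the built frontend straight from Express (the dist path is an assumption about where the build lands):

const express = require('express');
const path = require('path');

const app = express();
app.use(express.static(path.join(__dirname, 'dist'))); // serve the built frontend, no nginx needed
app.listen(4000, '0.0.0.0');                           // bind to all interfaces inside the container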
Fixing these two things led to functionality as designed :)
So this can be closed, or used as a reminder of these two things:
understanding the conceptual idea of how networks work
AND
understanding the tech stack being used and how it works.
/closed
I am trying to convert a firebase-queue worker that sends push notifications into a cloud function. I am using node-apn to send push notifications to iOS devices. It requires setting up a connection, which requires me to specify a key.pem file and a cert.pem file. These files are present at the same location as the worker js file and work without any problem. I moved the code over to a cloud function, but I get this error in the Logs console:
{ Error: ENOENT: no such file or directory, open './cert.pem'
at Error (native)
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: './cert.pem' } 'Unable to send push notification to iOS device. Socket Error'
Below is how the files are specified and the connection is created in the code:
var connectionOptions = {
  cert: './cert.pem',
  key: './key.pem',
  production: true
};
var apnConnection = new apn.Connection(connectionOptions);
I have tried specifying the cert file as ./cert.pem and as cert.pem, but I get a similar error in both cases. I guess the problem is that the .pem files are not shipped along with the functions.
How can I specify such files in a cloud function?
Your path reference isn't quite right for Firebase Functions.
It should be:
var connectionOptions = {
  cert: __dirname + '/cert.pem',
  key: __dirname + '/key.pem',
  production: true
};
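Equivalently, with path.join (this assumes the .pem files sit inside the functions directory so they get uploaded with the deploy):

var path = require('path');
var connectionOptions = {
  cert: path.join(__dirname, 'cert.pem'), // resolve relative to the deployed function's own directory
  key: path.join(__dirname, 'key.pem'),
  production: true
};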
I am new to Node.js and am using the module simple-ssh to execute shell commands from my Windows machine on a remote server.
Whenever I run my code, the console curses me with [Error: Authentication failure. Available authentication methods: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive].
I have already put an RSA private-key file in place and set the Windows environment variable SSH_AUTH_SOCK, but it still keeps giving the error.
Below is the code snippet which I wrote for simple-ssh:
var ssh = new SSH({
  host: sshHost,
  user: 'root',
  timeout: 11000000,
  key: require('fs').readFileSync("D:/Keys_pair_prvt_pub/rsa_key"),
  agent: process.env.SSH_AUTH_SOCK,
  agentForward: true
});
When I try to SSH to the remote host from my Windows command prompt, it gives me an error:
$> ssh -A <myRemote.host.com>
ssh : The term 'ssh' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the path is correct and try again.
Am I missing anything? If yes, then how do I overcome this?
Any help will be appreciated :)
Thanks.
Your public key needs to be added to ~/.ssh/authorized_keys on the server for key-based authentication to work.
mscdex's npm package ssh2, which simple-ssh wraps, has this in its docs for using the agent option on Windows:
agent - string - Path to ssh-agent's UNIX socket for ssh-agent-based user authentication. Windows users: set to 'pageant' for authenticating with Pageant or (actual) path to a cygwin "UNIX socket."
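So on Windows, running Pageant with your key loaded and pointing simple-ssh at it may be the simplest route; a minimal sketch along the lines of the quoted docs (host and command are placeholders):

var SSH = require('simple-ssh');

var ssh = new SSH({
  host: sshHost,
  user: 'root',
  agent: 'pageant' // per the ssh2 docs quoted above: use Pageant instead of SSH_AUTH_SOCK
});

ssh.exec('uname -a', {
  out: function (stdout) { console.log(stdout); }
}).start();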
First time with Linux and Meteor Up, so sorry if there's a stupid mistake. I tried to deploy the Meteor example app todos with mupx, and followed the instructions from the readme, but I'm getting the following error. (I'm using Ubuntu 14.04 LTS Server.) Thanks for the help.
Configuration file : mup.json
Settings file : settings.json
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Meteor app path : /home/jan/todos
Using buildOptions : {}
Currently, it is only possible to build iOS apps on an OS X system.
Started TaskList: Deploy app 'todos' (linux)
[h2544161.stratoserver.net] - Uploading bundle
[h2544161.stratoserver.net] - Uploading bundle: SUCCESS
[h2544161.stratoserver.net] - Sending environment variables
[h2544161.stratoserver.net] - Sending environment variables: SUCCESS
[h2544161.stratoserver.net] - Initializing start script
[h2544161.stratoserver.net] - Initializing start script: SUCCESS
[h2544161.stratoserver.net] - Invoking deployment process
Invoking deployment process: FAILED
-----------------------------------STDERR-----------------------------------
Failed to remove container (todos-frontend): Error response from daemon: No such container: todos-frontend
docker: Error response from daemon: failed to create endpoint todos on network bridge: Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use.
-----------------------------------STDOUT-----------------------------------
todos
base: Pulling from meteorhacks/meteord
518dc1482465: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
537c534356b6: Already exists
b65a0e1e554b: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
Digest: sha256:b5a4f6efa98e4070792ed36d33b14385a28e6ceda691a492ee5b9f2431b1515a
Status: Image is up to date for meteorhacks/meteord:base
d6d192579495851d5817288ff89abb69512562d7c2a7075f965484e64583c61b
Failed to remove container (todos-frontend): Error response from daemon: No such container: todos-frontend
docker: Error response from daemon: failed to create endpoint todos on network bridge: Bind for 0.0.0.0:80 failed: port is already allocated.
Just had the same issue.
I finally deployed after changing the port number in my deployment's mup.json to an unused port; somehow the Docker service only releases ports when it wants to. I've used 80, 8000, and 8001 so far, but I haven't successfully deployed to the same port twice.
From reading around (credit to this), it seems that different deployments may conflict with each other pretty easily. I have no resolution for this.
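For reference, the port a mupx deployment binds is set in the env block of mup.json; a sketch of the relevant fragment (field names as in the mupx readme, values are placeholders):

"env": {
  "ROOT_URL": "http://myapp.example.com",
  "PORT": 8000
}

Whatever already holds the old port on the server (here apparently a leftover container on 80) still has to be stopped or removed before redeploying to that port.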
I am trying to set up a GruntFile.js to automate the process of logging in to my personal website's server via SSH and pulling the latest version of the git repo. The relevant part of my grunt file looks like this:
sshconfig: {
  portfolioServer: {
    host: 'mySite.com',
    username: 'root',
    agent: process.env.SSH_AUTH_SOCK,
  }
},
sshexec: {
  deploy: {
    command: [
      'cd portfolio',
      'git pull'
    ].join(' && ')
  },
  options: {
    config: 'portfolioServer'
  }
},
However, when I run the associated task (I named it "grunt deploy"), I get the following error:
Running "sshexec:deploy" (sshexec) task
Warning: Connection :: error :: Error: Authentication failure. Available authentication methods: publickey,password Use --force to continue.
Aborted due to warnings.
My understanding is that this error means I have not set up the public/private SSH keys correctly. However, I have already gone through the process of setting up the keys, and I am already able to run the following command through git bash and log in successfully:
ssh root@mySite.com
I have searched online for this problem, and it seems it might have something to do with process.env.SSH_AUTH_SOCK not behaving in git bash on Windows the same way it would in a native Linux distribution.
What further steps do I have to take in my setup in order to make this deployment configuration work?
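One way to take SSH_AUTH_SOCK out of the equation is to hand grunt-ssh the private key directly; a minimal sketch using grunt-ssh's privateKey option (the key path is an assumption, adjust to where your key actually lives):

sshconfig: {
  portfolioServer: {
    host: 'mySite.com',
    username: 'root',
    // read the key file directly instead of relying on an agent socket
    privateKey: grunt.file.read(require('os').homedir() + '/.ssh/id_rsa'),
    passphrase: '' // set this if the key is encrypted
  }
},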