Reference a pem file in a cloud function - javascript

I am trying to convert a firebase-queue worker that sends push notifications into a cloud function. I am using node-apn to send push notifications to iOS devices. It requires setting up a connection, which requires me to specify a key.pem file and a cert.pem file. These files are present in the same location as the worker js file and work without any problem. I moved the code over to a cloud function, but I get this error in the Logs console:
{ Error: ENOENT: no such file or directory, open './cert.pem'
at Error (native)
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: './cert.pem' } 'Unable to send push notification to iOS device. Socket Error'
Below is how the files are specified and the connection is created in the code
var connectionOptions = {
    cert: './cert.pem',
    key: './key.pem',
    production: true
};
var apnConnection = new apn.Connection(connectionOptions);
I have tried specifying the cert file as both ./cert.pem and cert.pem, but I get a similar error in both cases. I guess the problem is that the .pem files are not shipped along with the functions.
How can I specify such files in a cloud function?

Your path reference isn't quite right for Firebase functions.
It should be:
var connectionOptions = {
    cert: __dirname + '/cert.pem',
    key: __dirname + '/key.pem',
    production: true
};

Related

nodejs ftp-srv package "Bad file stat formatter" error

I am attempting to write an FTP server in Node.js using the ftp-srv package, but I am having some issues. I have tried the ls and ep values for the file_format tag, but I keep receiving the error "Bad file stat formatter". I am not sure if the issue is that I am running on a Windows machine and the library only supports Unix systems.
This is the code that initialises my server.
const server = new FtpSrv({
    pasv_url: resolverFunction,
    url: 'ftp://0.0.0.0:21',
    log: logger,
    file_format: 'ls',
    anonymous: false,
});
These are the docs for ftp-srv for the file_format value:
https://github.com/autovance/ftp-srv#file_format

Azure pipeline throws “Chrome failed to start: crashed.” error on npm test script command

I'm trying to run a test script in Azure DevOps pipelines and I've been struggling to get selenium to run Chrome. I always get the following error:
WebDriverError: unknown error: Chrome failed to start: crashed.
(unknown error: DevToolsActivePort file doesn't exist)
I've looked at many similar questions but had no luck. This only happens on Azure DevOps pipelines. It works on my local machine, and if I log into the server and locate the source code from the build agent, I can run "npm run test" successfully.
Here is the detailed error log from Azure DevOps:
Error Log
Below is the JavaScript code that is triggered when running the script:
const { Given, When, Then, AfterAll } = require('@cucumber/cucumber');
const { until, Builder, By, Capabilities } = require('selenium-webdriver');
const { expect } = require('chai');

// WebDriver setup (for Chrome)
const capabilities = Capabilities.chrome();
const chrome = require('selenium-webdriver/chrome');
const chromeService = chrome.setDefaultService(new chrome.ServiceBuilder('chromedriver.exe').build());
const options = new chrome.Options();
options.addArguments('--headless');
options.addArguments('--no-sandbox');
options.addArguments('--disable-dev-shm-usage');
const driver = new Builder()
    .withCapabilities(capabilities)
    .setChromeOptions(options)
    .setChromeService(chromeService)
    .build();
Also, both the ChromeDriver and the browser are on the same version.
Thanks for your help.
According to the error message:
WebDriverError: unknown error: Chrome failed to start: crashed.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location C:\Program Files(x86)\Google\Chrome\Application\chrome.exe is no longer running)
It implies that ChromeDriver was unable to initiate/spawn a new browser session, i.e. a Chrome browser session.
Your main issue is that the Chrome browser is not installed at the default location on your system.
The server, i.e. ChromeDriver, expects Chrome to be installed in the default location for each system.
You could check this ticket for more details.
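If Chrome is installed somewhere other than the default path on the build agent, you can also point the driver at the binary explicitly. A minimal sketch using the selenium-webdriver chrome.Options API; the binary path below is a placeholder, not your agent's actual location:
const chrome = require('selenium-webdriver/chrome');
const { Builder } = require('selenium-webdriver');

// Tell ChromeDriver exactly which Chrome binary to launch instead of
// relying on the default install location.
const options = new chrome.Options()
    .setChromeBinaryPath('C:\\tools\\chrome\\chrome.exe') // hypothetical path
    .addArguments('--headless', '--no-sandbox', '--disable-dev-shm-usage');

const driver = new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();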

fs.mkdirSync() make directory in remote server folder

In Node.js, I'm attempting to use fs.mkdirSync() to create a new directory in a remote server/folder structure, '\\myserver.com'.
My computer (Windows) seems to be connected to this server and to have permission to access it, and the share is viewable and editable in my File Explorer. fs.mkdirSync() has no issue creating folders locally on my machine, but when I pass the server path as the directory, it throws an error.
Code:
fs.mkdirSync("\\\\myserver.com\\folder1\\folder2\\new folder")
Error:
{ Error: EPROTO: protocol error, mkdir '\\myserver.com\folder1\folder2\new folder'
at Object.fs.mkdirSync (fs.js:885:18)
at exports.create_new_folder (/file_calling_mkDir.js:107:6)
errno: -71,
code: 'EPROTO',
syscall: 'mkdir',
path: '\\myserver.com\folder1\folder2\new folder' }
What seems to be going wrong here? The folder does not get created and there isn't much documentation on using fs with server paths for me to refer to, making this difficult to solve. Any help would be massively appreciated.

How to set up SSH Agent Forwarding in node.js (simple-ssh module)?

I am new to Node.js and am using the simple-ssh module to execute shell commands from my Windows machine on a remote server.
Whenever I run my code, the console curses me with [Error: Authentication failure. Available authentication methods: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive].
I have already put an RSA private-key file in place and set the Windows environment variable SSH_AUTH_SOCK, but it still keeps giving the error.
Below is the code snippet which I wrote for simple-ssh:
var ssh = new SSH({
    host: sshHost,
    user: 'root',
    timeout: 11000000,
    key: require('fs').readFileSync("D:/Keys_pair_prvt_pub/rsa_key"),
    agent: process.env.SSH_AUTH_SOCK,
    agentForward: true
});
When I try to SSH into the remote host from my Windows command prompt, it gives me an error:
$> ssh -A <myRemote.host.com>
ssh : The term 'ssh' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the path is correct and try again.
Am I missing anything? If yes, then how do I overcome this ?
Any help will be appreciated :)
Thanks.
Your public key needs to be added to ~/.ssh/authorized_keys on the server before key-based authentication will work.
The ssh2 npm package (by mscdex), which simple-ssh wraps, has this in its docs about using the agent option on Windows:
"agent - string - Path to ssh-agent's UNIX socket for ssh-agent-based user authentication. Windows users: set to 'pageant' for authenticating with Pageant or (actual) path to a cygwin "UNIX socket.""
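Based on that note, on Windows the agent is usually Pageant rather than a Unix socket path, so SSH_AUTH_SOCK will not point at anything useful. A minimal sketch, assuming Pageant is running and already holds your key (the host is a placeholder):
var SSH = require('simple-ssh');

var ssh = new SSH({
    host: 'myRemote.host.com',   // placeholder host
    user: 'root',
    agent: 'pageant',            // on Windows, use 'pageant' instead of SSH_AUTH_SOCK
    agentForward: true
});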

Can't deploy todos; Failed to remove container (todos-frontend)

First time with Linux and Meteor Up, so sorry if there's a stupid mistake. I'm trying to deploy the Meteor example app todos with mupx, and I followed the instructions from the readme, but I'm getting the following error. (I'm using Ubuntu 14.04 LTS Server.) Thanks for the help.
Configuration file : mup.json
Settings file : settings.json
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Meteor app path : /home/jan/todos
Using buildOptions : {}
Currently, it is only possible to build iOS apps on an OS X system.
Started TaskList: Deploy app 'todos' (linux)
[h2544161.stratoserver.net] - Uploading bundle
[h2544161.stratoserver.net] - Uploading bundle: SUCCESS
[h2544161.stratoserver.net] - Sending environment variables
[h2544161.stratoserver.net] - Sending environment variables: SUCCESS
[h2544161.stratoserver.net] - Initializing start script
[h2544161.stratoserver.net] - Initializing start script: SUCCESS
[h2544161.stratoserver.net] - Invoking deployment process
Invoking deployment process: FAILED
-----------------------------------STDERR-----------------------------------
Failed to remove container (todos-frontend): Error response from daemon: No such container: todos-frontend
docker: Error response from daemon: failed to create endpoint todos on network bridge: Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use.
-----------------------------------STDOUT-----------------------------------
todos
base: Pulling from meteorhacks/meteord
518dc1482465: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
537c534356b6: Already exists
b65a0e1e554b: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
Digest: sha256:b5a4f6efa98e4070792ed36d33b14385a28e6ceda691a492ee5b9f2431b1515a
Status: Image is up to date for meteorhacks/meteord:base
d6d192579495851d5817288ff89abb69512562d7c2a7075f965484e64583c61b
Failed to remove container (todos-frontend): Error response from daemon: No such container: todos-frontend
docker: Error response from daemon: failed to create endpoint todos on network bridge: Bind for 0.0.0.0:80 failed: port is already allocated.
Just had the same issue. It finally deployed after changing the port number in my mup.json to an unused port; somehow the Docker service only releases ports when it wants to. I've used 80, 8000, and 8001 so far, but I haven't successfully deployed to the same port twice (credit to this).
It seems that different deployments can conflict with each other pretty easily; I have no real resolution for this.
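For reference, the port is set through the env block of mup.json; a rough sketch for mupx-style configs (key names can differ between mup versions, and the ROOT_URL is a placeholder):
{
    "env": {
        "PORT": 8001,
        "ROOT_URL": "http://example.com"
    }
}
Before redeploying to port 80 it is also worth checking with docker ps whether an old container (or another service such as nginx) is still holding the port.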
