Run vuejs development server with SSL (to serve over HTTPS) - javascript

Important Detail & Workaround: I've come across this: "Deprecating Powerful Features on Insecure Origins".
It explains that these features require HTTPS on anything other than localhost. My development environment lives on my laptop; on weekends I SSH into that box, which is how I ran into this problem yesterday. I run the vuejs dev server remotely on the laptop, make it listen on 0.0.0.0 and open the page on my desktop. This causes the problem.
I've tried SSH port forwarding to localhost instead. That works and is an acceptable workaround for me.
The original question still stands; I will leave it open for now.
I'm working with a JS API that requires a secure context (WebRTC), so to do development I need to run the dev server over HTTPS. How can I do that with vuejs?
I quick-started the project using webpack. I found some links explaining how to run webpack-dev-server over SSL, but I don't know how to do that with a vuejs application, and I'm very green when it comes to anything JavaScript & NPM. The webpack links all mention a config file, but there is no such file in my project. The closest thing I see is main.js, and there is absolutely no configuration in there.
In essence, what I have is the result of the following steps:
mkdir demo
cd demo
npm install --save-dev vue-cli
./node_modules/.bin/vue init vuetifyjs/webpack-advanced demo
# Use the defaults here (except for "Vue build", where I chose "Runtime-only")
cd demo
npm install
npm run dev # <-- This is the command I would like to run with SSL

I don't know whether you still have this problem or whether anyone else still runs into it, but I found a solution.
Follow the instructions above to generate an OpenSSL key and cert in your working folder.
In /node_modules/webpack-dev-server/bin/webpack-dev-server.js, change these options from:
key: {
  type: 'string',
  describe: 'Path to a SSL key.',
  group: SSL_GROUP
},
cert: {
  type: 'string',
  describe: 'Path to a SSL certificate.',
  group: SSL_GROUP
},
to:
key: {
  type: 'string',
  describe: fs.readFileSync('key.pem'),
  group: SSL_GROUP
},
cert: {
  type: 'string',
  describe: fs.readFileSync('cert.pem'),
  group: SSL_GROUP
},
then set
argv.https = true;
That is all I had to do to have my code served over HTTPS.
Note that the command line will still print http://localhost:8080, but when you open the https:// URL in the browser, your app will be displayed after a certificate warning from the browser.
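Keep in mind that anything patched under node_modules is overwritten on the next install. If your template exposes a webpack config you can edit, a less invasive sketch is to pass the same options through the devServer section (the exact file name, e.g. build/webpack.dev.conf.js, depends on your template and is an assumption here):
// webpack dev config (sketch) – adjust the file name/location to your template
const fs = require('fs')

module.exports = {
  // ...existing dev configuration...
  devServer: {
    https: {
      key: fs.readFileSync('key.pem'),
      cert: fs.readFileSync('cert.pem')
    },
    host: '0.0.0.0',
    port: 8080
  }
}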

Requirement: openssl installed.
First we have to generate an SSL certificate based on a key made by openssl, without a pass phrase, because a pass-phrase-protected key will generate an error like this:
nodejs https>node server.js
_tls_common.js:87
    c.context.setKey(options.key);
              ^
Error: error:0907B068:PEM routines:PEM_READ_BIO_PRIVATEKEY:bad password read ...
Go inside your project and start by creating the key & certificate:
openssl req -nodes -new -x509 -keyout key.pem -out cert.pem -days 365
-nodes: don't encrypt the private key at all.
Install the packages needed for your project (--save adds them to package.json; note that https and fs are Node core modules, so strictly only express needs installing):
npm install express --save
npm install https --save
npm install fs --save
Now create the server file:
touch server.js
nano server.js
Copy/paste this into server.js:
var fs = require('fs');
var https = require('https');
var app = require('express')();

var options = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
};

app.get('/', function (req, res) {
  res.send('Hello World!');
});

https.createServer(options, app).listen(3000, function () {
  console.log('Started!');
});
In this case we don't use port 443 because it is already used by other services, so I use port 3000, which isn't used by any other app.
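To tie this back to the Vue question: the same pattern can serve the production build over HTTPS. A minimal sketch, assuming npm run build put the compiled app into the default dist/ folder:
var fs = require('fs');
var https = require('https');
var express = require('express');
var app = express();

var options = {
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
};

// serve the files produced by `npm run build`
app.use(express.static('dist'));

https.createServer(options, app).listen(3000, function () {
  console.log('Serving dist/ on https://localhost:3000');
});
This serves the built app rather than the live-reloading dev server, so it is a workaround, not a replacement for npm run dev.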

Related

typescript, javascript, angular, nginx, alpine, docker: communication in the network via nginx; I think I missed something and am looking for a review

Once I migrated to Docker to have a virtual network simulating an actual network (bridge type with DNS that works; the FQDN is resolved correctly to the referring IP), the following errors appeared in the console.log AND no data is displayed on the frontend website.
ERROR Error: NG0901
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend:4000/crafts. (Reason: CORS request did not succeed). Status code: (null).
ERROR
Object { headers: {…}, status: 0, statusText: "Unknown Error", url: "http://backend:4000/crafts", ok: false, name: "HttpErrorResponse", message: "Http failure response for http://backend:4000/crafts: 0 Unknown Error", error: error }
That's the browser's (Firefox) console.log.
I think nginx is doing things with the headers and/or the body is empty due to server-side configs with nginx.
On localhost everything worked out fine.
So I'm on the nginx config, but so far without any success. I read about similar problems but couldn't find a solution myself, or the answers I read didn't work with my setup.
I tried changing the IP to 0.0.0.0 to make it accessible in the network.
Oh, and I'm using nodejs/expressjs:
app.listen(port,ip)
I use a Dockerfile and docker-compose.yml to build the images, and a PowerShell script to compose them.
What I suspect is causing the problem:
backend:
index.js is run and looks like this:
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const express_1 = __importDefault(require("express"));
const Routes_1 = __importDefault(require("./Routes"));
const app = (0, express_1.default)();
app.use(function (req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "*");
res.header("Access-Control-Allow-Methods", "PUT,POST,GET,DELETE,OPTIONS");
next();
});
// middleswares
app.use(express_1.default.json());
app.use(express_1.default.urlencoded({ extended: false })); //changed to see wheater it would effect the package isssue- should allow
app.use(Routes_1.default);
app.listen(4000,'0.0.0.0'); // or fqdn 'frontend'
console.log('server on port', 4000);
This is generated from index.ts by a build command.
The referring Dockerfile:
FROM node:alpine as builder
WORKDIR /app/
COPY . /app/
COPY package.json /app/
COPY package-lock.json /app/
RUN cd /app/
RUN npm install -g
RUN npm update express
RUN npm install pg
FROM nginx:alpine
COPY --from=builder ./app/dist ./usr/share/nginx/html/
EXPOSE 3999-6001
CMD ["nginx", "-g", "daemon off;"]
RUN apk add --update nodejs
RUN apk add --update npm
After the image runs, I open the terminal and run the following in the /usr/share/nginx/html directory:
npm i express
npm i pg
node index.js
Then I install vim
and edit the nginx config like this:
vi /etc/nginx/nginx.conf
I add a server directive, make it listen to the FQDN 'frontend' (or its referring IP) and port 4000,
listen ip:port kind of syntax.
I added error and access logs earlier on and they don't report problems, apart from sometimes saying that IPs are not available; I'm lacking the understanding of how to interpret that.
PostgreSQL is also running in a Docker container on the default port 5432 with the FQDN 'database', which is also properly resolvable,
same as the backend's FQDN.
There is a lot more stuff linking the short pieces of code I've shown; feel free to request more if you're interested or if you think it'd be required to find out what's going wrong.
I learnt my lesson:
servers listen on their own IPs, or on their own localhost,
so I had a misconception there. Thanks to the people taking a look in here.
Also, a nodejs/expressjs server doesn't necessarily need nginx to run on; node is enough for this purpose.
Fixing these two things led to functionality as designed :)
So this can be closed, or used as a reminder of these two things:
understanding the conceptual idea of how networks work
AND
understanding the tech stack being used and how it works
else
/closed

Error 503 Service unavailable when using Apache with a Node app [duplicate]

I am trying to get a brand-new cloud-based server running a default Ubuntu Server 20.04 working with Apache and Node. The Node server appears to be running without issues and reports that port 4006 is open. However, I believe my Apache config is not right: requests hang for a very, very long time. No errors are displayed in the Node terminal, so the fault must lie in my Apache config, seeing as we are getting the Apache errors below and no JS errors.
Request error after some time
502 proxy error
Apache Error Log
[Sun Oct 17 20:58:56.608793 2021] [proxy:error] [pid 1596878] (111)Connection refused: AH00957: HTTP: attempt to connect to [::1]:4006 (localhost) failed
[Sun Oct 17 20:58:56.608909 2021] [proxy_http:error] [pid 1596878] [client 207.46.13.93:27392] AH01114: HTTP: failed to make connection to backend: localhost
vhost
<VirtualHost IP_ADDRESS:80>
    ServerName api.aDomain.com
    Redirect permanent / https://api.aDomain.com/
</VirtualHost>

<IfModule mod_ssl.c>
    <VirtualHost IP_ADDRESS:443>
        ServerName api.aDomain.com

        ProxyRequests on
        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
        ProxyPass / http://localhost:4006/
        ProxyPassReverse / http://localhost:4006/

        #certificates SSL
        SSLEngine on
        SSLCACertificateFile /etc/ssl/api.aDomain.com/apimini.ca
        SSLCertificateFile /etc/ssl/api.aDomain.com/apimini.crt
        SSLCertificateKeyFile /etc/ssl/api.aDomain.com/apimini.key

        ErrorLog ${APACHE_LOG_DIR}/error_api.aDomain.com.log
        CustomLog ${APACHE_LOG_DIR}/access_api.aDomain.com.log combined
    </VirtualHost>
</IfModule>
terminal output
[nodemon] 1.19.4
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `babel-node -r dotenv/config --inspect=9229 index.js`
Debugger listening on ws://127.0.0.1:9229/c1fcf271-aea8-47ff-910e-fe5a91fce6d2
For help, see: https://nodejs.org/en/docs/inspector
Browserslist: caniuse-lite is outdated. Please run next command `npm update`
🚀 Server ready at http://localhost:4006
Node server
import cors from 'cors'
import scrape from './src/api/routes/scrape'

const express = require('express')
const { ApolloServer, gql } = require('apollo-server-express')
const { postgraphile } = require('postgraphile')
const ConnectionFilterPlugin = require('postgraphile-plugin-connection-filter')

const dbHost = process.env.DB_HOST
const dbPort = process.env.DB_PORT
const dbName = process.env.DB_NAME
const dbUser = process.env.DB_USER
const dbPwd = process.env.DB_PWD

const dbUrl = dbPwd
  ? `postgres://${dbUser}:${dbPwd}@${dbHost}:${dbPort}/${dbName}`
  : `postgres://${dbHost}:${dbPort}/${dbName}`

var corsOptions = {
  origin: '*',
  optionsSuccessStatus: 200, // some legacy browsers (IE11, various SmartTVs) choke on 204
}

async function main() {
  // Construct a schema, using GraphQL schema language
  const typeDefs = gql`
    type Query {
      hello: String
    }
  `
  // Provide resolver functions for your schema fields
  const resolvers = {
    Query: {
      hello: () => 'Hello world!',
    },
  }

  const server = new ApolloServer({ typeDefs, resolvers })
  const app = express()
  app.use(cors(corsOptions))
  app.use(
    postgraphile(process.env.DATABASE_URL || dbUrl, 'public', {
      appendPlugins: [ConnectionFilterPlugin],
      watchPg: true,
      graphiql: true,
      enhanceGraphiql: true,
    })
  )
  server.applyMiddleware({ app })

  // Scraping tools
  scrape(app)

  const port = 4006
  await app.listen({ port })
  console.log(`🚀 Server ready at http://localhost:${port}`)
}

main().catch(e => {
  console.error(e)
  process.exit(1)
})
Apache Mods Enabled
/etc/apache2/mods-enabled/proxy.conf
/etc/apache2/mods-enabled/proxy.load
/etc/apache2/mods-enabled/proxy_http.load
Updated Error Logs
[Thu Oct 21 10:59:22.560608 2021] [proxy_http:error] [pid 10273] (70007)The timeout specified has expired: [client 93.115.195.232:8963] AH01102: error reading status line from remote server 127.0.0.1:4006, referer: https://miniatureawards.com/
[Thu Oct 21 10:59:22.560691 2021] [proxy:error] [pid 10273] [client 93.115.195.232:8963] AH00898: Error reading from remote server returned by /graphql, referer: https://miniatureawards.com/
In most situations this is caused by SELinux (when you have RHEL or CentOS):
# setsebool -P httpd_can_network_connect 1
link: https://unix.stackexchange.com/questions/8854/how-do-i-configure-selinux-to-allow-outbound-connections-from-a-cgi-script
Also check:
connectivity between the machines
that the back-end port is open
use a static IPv4 address, or a host name that is in your /etc/hosts file (a quick connectivity check is sketched below)
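A minimal sketch of such a check from the Apache host, assuming the backend from the question on 127.0.0.1:4006 (adjust host, port and path to your setup):
// check-backend.js – run with: node check-backend.js
const http = require('http');

http.get({ host: '127.0.0.1', port: 4006, path: '/graphql' }, res => {
  console.log('backend reachable, HTTP status:', res.statusCode);
}).on('error', err => {
  console.error('backend NOT reachable:', err.message);
});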
I cannot say exactly what happened; it could be that the NodeJS app crashed and is no longer running, or that the Apache files are misconfigured. But I strongly believe this scenario will be solved by starting again from the top.
These steps go through updating Ubuntu packages, installing the needed applications, configuring the Apache files and setting up a reverse proxy with NodeJS and Apache.
Just don't touch your NodeJS files and other code-related applications and they will be safe. You may also back them up just to make sure. Other applications running on that Ubuntu server, for example a database like MySQL, will be just fine and still running.
1. First we need to update the Ubuntu packages and install Apache and NodeJS
$ sudo apt update
$ sudo apt install apache2 npm
2. Run this command to enable the modules that let us use Apache as a reverse proxy server
sudo a2enmod proxy proxy_http rewrite headers expires
3. Create an Apache virtual host file.
This command will let you use the Ubuntu terminal as your text editor; follow the guide and the prompts from the terminal to write.
NOTE:
Replace "yourSite.com" with the domain of your site. The name of the file isn't really important, but I think it's better to name it after your site's domain so you can recognize it.
$ sudo nano /etc/apache2/sites-available/yourSite.com.conf
4. Use the nano editor to write the Apache config file for your site.
Notice: this part is critical, so please pay attention.
Change ServerName and ServerAlias to your site's domain name.
ProxyPass and ProxyPassReverse each take two parameters.
The first one is a forward slash "/": the URL path on your site that should be proxied; a single slash means the whole site.
The second one is the URL "http://127.0.0.1:3000/" of your NodeJS application. Pay attention to the port "3000": you may need to replace it with the port you use in your NodeJS app.
<VirtualHost *:80>
    # replace example.com with your site domain name without www at the beginning
    ServerName example.com
    # replace www.example.com with your site domain name beginning with www. + yourdomainname + .com
    ServerAlias www.example.com

    ProxyRequests Off
    ProxyPreserveHost On
    ProxyVia Full

    <Proxy *>
        Require all granted
    </Proxy>

    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
5. Disable the default Apache site and enable the new one.
$ sudo a2dissite 000-default
$ sudo a2ensite example.com.conf
6. Restart your Apache Server to apply the changes
sudo systemctl restart apache2
We could be done at this point, as we have finished setting up Apache as a reverse proxy, but we also need to install your project's npm packages and then run your NodeJS application.
7. The rest of the steps are all related to NodeJS deployment. You may already know these steps.
// install npm packages
npm install
// for a better experience using NodeJS in production, install pm2 globally
npm install -g pm2
// then run your NodeJS application using pm2 (run this from the root of your NodeJS project folder)
pm2 start index.js   // replace index.js with your app's entry file
// run these two pm2 commands to make sure your NodeJS app is restarted when it encounters downtime
$ pm2 save
$ pm2 startup
Your Apache and NodeJS server is up and running now.
Try to access your site by entering your site's domain name in the browser's address bar,
e.g. http://yourSite.com
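For completeness, a minimal sketch of a Node app matching the proxy target above (Express and port 3000 are assumed, as in the ProxyPass line; binding to 127.0.0.1 keeps the app reachable only through Apache):
// app.js – a stand-in backend for the reverse proxy above
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello from Node behind Apache');
});

// listen only on the loopback interface; Apache proxies public traffic to it
app.listen(3000, '127.0.0.1', () => {
  console.log('Node app listening on http://127.0.0.1:3000');
});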
If you use Docker for your Node server, then it might be set up incorrectly.
I'm not an expert on this topic, but I have a similar setup; I use socket.io to serve WebSockets.
From your posts it seems you don't need to proxy WebSockets as well; the one shown in your logs seems to be only for debugging purposes (please correct me if I'm wrong).
The following is the core of my Apache configuration:
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/socket.io [NC]
RewriteCond %{QUERY_STRING} transport=websocket [NC]
RewriteRule /(.*) ws://127.0.0.1:4006/$1 [P,L]

<Location />
    ProxyPass http://127.0.0.1:4006/ retry=2
    ProxyPassReverse http://127.0.0.1:4006/
</Location>
Another couple of suggestions.
Warning
Do not enable proxying with ProxyRequests until you have secured your server. Open proxy servers are dangerous both to your network and to the Internet at large.
Source: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxyrequests
I don't know what the IPv6 setup on your host is; you could try using 127.0.0.1 rather than localhost in your Apache configuration, to force Apache to use IPv4.
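The same mismatch can also be addressed on the Node side: the Apache log shows a connection attempt to [::1]:4006 (IPv6), so explicitly binding the app to an address both sides agree on can help. A minimal sketch based on the listen call from the question (the host value is an assumption to pair with ProxyPass http://127.0.0.1:4006/):
// instead of: await app.listen({ port })
// bind explicitly to the IPv4 loopback so it matches the ProxyPass target
await app.listen({ port: 4006, host: '127.0.0.1' })
console.log('🚀 Server ready at http://127.0.0.1:4006')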

Use puppeteer with imgui-js

In case the length of the question might be scary, the summary is: how do I interact with a front-end app from a Node server? Puppeteer usage should come along once that is solved, I believe. The question is long because I explained all my failed attempts to get back-end code (puppeteer) working in the browser. Apart from building and running the repo, which, although easy when following the instructions, might take some time, I believe the question should be feasible for a regular JavaScript/Node programmer. There it goes, thanks for reading.
I cloned, built and ran the imgui-js repository successfully.
I want to use it along with puppeteer for a small app. All the npm commands and the things I tried below happen inside the mentioned imgui-js project.
I tried:
1.- Run the node example from the project: with npm run-script start-example-node.
This runs the example/index.js script, but nothing is drawn, as we are not in the browser and window is undefined. This can be checked by debugging in main.ts:
if (typeof(window) !== "undefined") {
    window.requestAnimationFrame(done ? _done : _loop);
}
So I do not understand the purpose of this example in the repo.
Edit: it seems it may be there to have the client-server communication done, but I do not know how to do this.
2.- Puppeteer browserify:
I followed the browserify hello world.
Just a summary of the steps:
npm install -g browserify
npm i puppeteer
Go to the build folder to generate the bundle.js for my const puppeteer = require('puppeteer'); script, so cd example, cd build, browserify myScript.js -o bundle.js
Add <script src="./build/bundle.js"></script> to the example/index.html.
I obtain this error:
Uncaught TypeError: System.register is not a function
at Object.96.puppeteer (bundle.js:19470:8)
at o (bundle.js:1:265)
at r (bundle.js:1:431)
at bundle.js:1:460
I also tried browserifying main.js along with my script: browserify main.js myScript.js -o bundle.js. Same error.
3.- Try to set up puppeteer with the rollup module bundler, following this resource among others. So doing:
npm install --save-dev rollup tape-modern puppeteer
npm install --save-dev rollup-plugin-node-resolve
npm install --save-dev rollup-plugin-commonjs
npm install --save-dev sirv tape-browser-color
And I tried to add that to the imgui-js rollup.config.js configuration file.
I think it's not working because all the server setup done by npm start and so on is not performed with rollup.
4.- Puppeteer-web: following the steps of this resource, I tried to run puppeteer in the browser.
npm i puppeteer-web
Code in the client and the server:
Client:
<script src="https://unpkg.com/puppeteer-web"></script>
<script>
  const browser = await puppeteer.connect({
    browserWSEndpoint: `ws://0.0.0.0:8080`, // <-- connect to a server running somewhere
    ignoreHTTPSErrors: true
  });
  const pagesCount = (await browser.pages()).length;
  const browserWSEndpoint = await browser.wsEndpoint();
  console.log({ browserWSEndpoint, pagesCount });
</script>
Server (server.js script):
const httpProxy = require("http-proxy");
const host = "0.0.0.0";
const port = 8080;

async function createServer(WSEndPoint, host, port) {
  await httpProxy
    .createServer({
      target: WSEndPoint, // where we are connecting
      ws: true,
      localAddress: host // where to bind the proxy
    })
    .listen(port); // which port the proxy should listen to
  return `ws://${host}:${port}`; // ie: ws://123.123.123.123:8080
}

const puppeteer = require("puppeteer");
puppeteer.launch().then(async browser => {
  const pagesCount = (await browser.pages()).length; // just to make sure we have the same stuff on both places
  const browserWSEndpoint = await browser.wsEndpoint();
  const customWSEndpoint = await createServer(browserWSEndpoint, host, port); // create the server here
  console.log({ browserWSEndpoint, customWSEndpoint, pagesCount });
})
Run the server script: node server.js. The server seems to be created properly. Terminal log:
browserWSEndpoint: 'ws://127.0.0.1:57640/devtools/browser/58dda865-b26e-4696-a057-25158dbc4093',
customWSEndpoint: 'ws://0.0.0.0:8080',
pagesCount: 1
npm start (from a new terminal, to make sure the created server does not terminate)
I obtain this error in the client:
WebSocket connection to 'ws://0.0.0.0:8080/' failed:
(anonymous) # puppeteer-web:13354
I just want to use puppeteer together with this front-end library in my app, fetching data with puppeteer to display it in the UI and providing the user input back to puppeteer.
My ideal solution would be number 1, where I would be able to use any npm package apart from puppeteer and communicate from the backend (node server) to the client (imgui user interface) back and forth.
Thanks for any help.
EDIT:
I more or less achieved it with the node server solution, which is my desired scenario, using expressjs and nodemon: running a separate server in the application and communicating with the app. Now I would find most valuable any help on:
1.- The browserify solution, and/or insight into why my attempts with this approach failed.
2.- A solution that keeps everything in one and the same server, i.e. the server that in the repo serves the HTML to the browser with "start-example-html": "http-server -c-1 -o example/index.html". I don't know if that is possible. It's because I would not lose the live reloading etc. if I served both things with the expressjs server I added myself.
Kind of what Create React App does with Proxying API Requests.
3.- As suggested in the comments, guidance or a solution to make the server code render a window through node with the imgui output (npm start-example-node) would of course also be a valid answer to the question.
It may not seem quite correct to change the question's conditions during the bounty with a somewhat broader new scenario, but now that the conditions have changed, I'm trying to make the most of the investment and the research already done on the topic, also due to my lack of expertise in the web-dev module-bundling configuration area; so the bounty may be granted for the most valuable advice on any of the topics mentioned above. Thanks for your understanding.
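For reference, a minimal sketch of the Express bridge described in the edit: the Node side runs puppeteer and exposes scraped data as JSON, and the imgui-js page polls that endpoint and posts user input back. Names such as /data, /input, the target URL and the port are illustrative, not from the imgui-js repo:
// bridge-server.js – assumes `npm i express puppeteer`
const express = require('express');
const puppeteer = require('puppeteer');

const app = express();
app.use(express.json());

let latestData = { title: null };

// scrape something once with puppeteer and keep the result in memory
async function scrapeOnce() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');      // illustrative target
  latestData = { title: await page.title() };  // whatever you actually scrape
  await browser.close();
}

app.get('/data', (req, res) => res.json(latestData));  // the imgui front end polls this
app.post('/input', (req, res) => {                     // the imgui front end posts user input here
  console.log('user input from the UI:', req.body);
  res.sendStatus(204);
});

scrapeOnce().catch(console.error);
app.listen(3001, () => console.log('bridge listening on http://localhost:3001'));
On the imgui-js side, the render loop can fetch('http://localhost:3001/data') periodically and POST to /input; if the page is served from a different origin, CORS headers (or serving example/index.html from this same Express app) would also be needed.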

Grunt Node and Express Local Dev HTTPS Certificates

I'm trying to make a start with Service Workers and read that you are required to have an SSL certificate.
I've got an AngularJS 1.x application and a Node Express back end, and I run both independently: I use grunt serve to run the front end on port 8443, and node app.js to run Express on 7443.
note: I'm doing this on macOS
I used the guide on how to set up https on a project that uses Grunt: here
openssl genrsa -out livereload.key 1024
openssl req -new -key livereload.key -out livereload.csr
openssl x509 -req -in livereload.csr -signkey livereload.key -out livereload.crt
Gruntfile.js
options: {
  protocol: 'https', // or 'http2'
  port: 8443,
  hostname: '0.0.0.0',
  key: grunt.file.read('livereload.key'),
  cert: grunt.file.read('livereload.crt')
},
node app.js
var privateKey = fs.readFileSync('../livereload.key', 'utf8');
var certificate = fs.readFileSync('../livereload.crt', 'utf8');
var credentials = {key: privateKey, cert: certificate};

// (elsewhere in app.js) the Express app is wrapped in an HTTPS server:
var httpsServer = https.createServer(credentials, app);

httpsServer.listen(7443, config.ip, function () {
  console.log('Express server listening on %d, in %s mode', 7443, app.get('env'));
});
Both start with no errors; the front end does complain that the connection is not private. When my front end tries to hit an endpoint on the Express server, I receive the following:
OPTIONS https://localhost:7443/api/census/general net::ERR_INSECURE_RESPONSE
Could someone please assist with this problem of mine?
You have created a self-signed certificate, which is fine for development and testing but is considered unsafe for general use. Unlike SSL certificates purchased from reputable third parties, self-signed certificates are untrusted by default.
You will need to tell your OS to explicitly trust the certificate. I'm unfamiliar with macOS, but this question was previously answered on SuperUser.

NodeJs tls.createServer equivalent to Apache SSLCertificateChainFile?

I have an Apache config for SSL like so:
SSLCertificateFile ~/certs/server.crt
SSLCertificateKeyFile ~/certs/server.key
SSLCertificateChainFile ~/certs/bundle.crt
Now in my NodeJs server, I am using grunt with grunt-connect as the server.
The documentation for grunt-connect says that it can be configured using the following syntax.
grunt.initConfig({
  connect: {
    server: {
      options: {
        protocol: 'https',
        port: 8443,
        key: grunt.file.read('server.key').toString(),
        cert: grunt.file.read('server.crt').toString(),
        ca: grunt.file.read('ca.crt').toString()
      },
    },
  },
});
I need this configuration to match my Apache configuration, which has a certificate file, a key file and a bundle file.
Looking at the documentation for the tls.createServer in NodeJs,
I do not see an option that looks like it could be equivalent to SSLCertificateChainFile.
How can I make my NodeJs connect server mirror the same SSL configuration as my Apache server?
EDIT
I will also award the bounty to someone who can do this:
Create an SSCCE Gruntfile that demonstrates how to configure connect to accept a server certificate and a bundle certificate.
You may try concatenating the server.crt and ca.crt files into one file and using the result in the cert option. Don't use the ca option; per the docs it is needed only 'if the client uses the self-signed certificate'.
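A sketch of what that could look like in a Gruntfile, assuming the file names from the Apache config above (leaf certificate first, then the intermediate bundle, concatenated into one PEM string):
module.exports = function (grunt) {
  grunt.initConfig({
    connect: {
      server: {
        options: {
          protocol: 'https',
          port: 8443,
          key: grunt.file.read('server.key').toString(),
          // leaf certificate followed by the chain, as a single PEM blob
          cert: grunt.file.read('server.crt').toString() +
                grunt.file.read('bundle.crt').toString()
        },
      },
    },
  });

  grunt.loadNpmTasks('grunt-contrib-connect');
  grunt.registerTask('default', ['connect:server:keepalive']);
};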
