External Hello World server in Node.js - JavaScript

I am looking to use Node.js to host a web server on a dedicated PC, but I can't seem to access it from anywhere besides my local network.
From what I've found online, it seems like all I have to do is enter externalIP:port in a browser on a different network and I should see my Hello World, but I can't get it to work without exposing my localhost through something like ngrok.
Does anyone know how I could access my Node server from an external PC on the internet, and not just localhost?
Here are the steps to reproduce:
require("http").createServer(function(request, response){
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write("Hello World!");
    response.end();
}).listen(8080);
"node server.js" - very simple server hosting on port 8080 that just sends 'Hello World!' response
check localhost:8080 on my machine, see "Hello World!" working
get externalIP from ipchicken.com
check 'externalIP':8080 on external machine (phone, diff network pc), never works
Maybe there is something I am missing, but I thought this was pretty straightforward

So it turns out the router I was using from Optimum has a "special" settings page to allow Port 80 traffic. And there is no link to it anywhere on the port forwarding page. So after doing all your port forwarding, make sure you follow this link:
http://optimumdev.custhelp.com/app/answers/detail/a_id/2140/related/1

You need to bind the service to all available IP interfaces; change your listen(8080) call to:
require("http").createServer(function(request, response){
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write("Hello World!");
    response.end();
}).listen(8080, "0.0.0.0");

Assuming you have a standard home network setup, you can't just send a request to your external IP address and expect it to magically reach your machine. The request will hit your router and the router won't know what to do with it. You need to configure your router to forward requests on port 8080 to your local address (probably something like 192.168.0.x). Even that may not be enough. You may need to configure a software firewall on your machine to allow incoming requests on that port as well. Hope this helps :)

Related

web resources don't load on Beanstalk - https instead of http header

Issue
So I created a simple HTTP web server with Node.js and Express (mostly it's just the skeleton from the express application generator). Then I uploaded the server to an AWS Beanstalk web environment.
The issue is that I can't load the resources (CSS and JavaScript) from the server when I connect to it.
I get a
net::ERR_CONNECTION_TIMED_OUT
for all resources when I open the site in my browser.
I assume the issue is that the GET requests on Beanstalk use an "https" URL,
Request URL: https://...elasticbeanstalk.com/javascripts/GameLogic.js
because it works on my localhost, where it uses an "http" URL.
Request URL: http://localhost:3000/javascripts/GameLogic.js
Also, the HTML page itself loads (after the resources time out), but via an "http" request:
Request URL: http://....elasticbeanstalk.com/
Can you change the request URL (for CSS, JS) in an AWS Beanstalk web environment to use http instead of https? Or change it in the HTML or in Node.js?
Info
The server uses the Node.js helmet module.
Then I just send my HTML page on incoming requests:
app.get('/', function(req, res, next) {
    res.sendFile(path.join(__dirname, "public", "main.html")); // serve main.html for the / path
});
In the HTML page I request the resources:
<link rel="stylesheet" type="text/css" href="\stylesheets\style.css">
<script src="/javascripts/jquery.min.js"></script>
<script src="/javascripts/GameLogic.js"></script>
Solution attempts
I tried removing helmet, but that version behaves exactly the same and doesn't load resources when it runs on Beanstalk (on the localhost server it always worked).
I also tried changing some security group rules to allow HTTPS on port 443 from all sources to the load balancer, and HTTPS on port 443 from the load balancer to the EC2 instance. The situation didn't change.
Then I tried redirecting HTTPS requests to HTTP:
app.get('/', function(req, res, next){
    console.log("redirect to http?");
    res.redirect('http://' + req.headers.host + req.url);
});
But then the site doesn't even load the HTML, because of "too many redirects".
So currently I'm out of ideas on how to make the HTTPS requests work or how to change them to HTTP requests.
Note
I am also using a student account, so I have no rights to use AWS Certificate Manager or to redirect ELBs to HTTPS, if that has anything to do with it.
OK, it was the helmet module for Node.js at fault. I don't understand it well enough to say what exactly changed the headers to https; HSTS seems to be only part of it.
But after completely removing it, I and other people could access the web app and load the resources.
The reason why I didn't catch it earlier in my tests: I never deleted or regenerated my package-lock after removing helmet, so it was still in there. Now I uninstalled it and made a new package-lock before uploading to AWS Beanstalk.

Getting server certificates from https/ws server automatically

I'm hosting a website on GitHub Pages that opens a video stream on certain clicks, over a secure WebSocket (wss), using this library: github.com/websockets/ws. When connecting the video from an HTTP page over a plain WebSocket (ws) everything was fine, but I realized the GitHub page will be hosted over HTTPS, and Firefox does not allow unsecured WebSocket connections from HTTPS pages; wss is required.
To do that I followed the steps on the websockets/ws page, and the WebSocket server is sort of "hiding" behind an HTTPS server that does the certificate handshake. The problem is this: if I use my web page normally and open the link that starts a wss connection to the video, it doesn't work; the server doesn't know it exists. If I first visit the server directly via IP and port using an https://IP:9999 link, it retrieves the certs, and from then on the website works indefinitely.
Is there a way to do this naturally that doesn't require visiting a headless server to do the cert handshake? I just want to open a wss connection, but the overhead of having to visit the server directly for certs seems a bit bizarre.
The server setup looks like this:
const fs = require('fs');
const https = require('https');
const ws = require('ws');

const server = https.createServer({
    cert: fs.readFileSync('./cert/cert.pem'),
    key: fs.readFileSync('./cert/key.pem'),
}, function (req, res) {
    console.log(new Date() + ' ' +
        req.connection.remoteAddress + ' ' +
        req.method + ' ' + req.url);
    res.writeHead(200);
    res.end("hello foobarbackend\n");
});

this.wsServer = new ws.Server({
    server
});
this.wsServer.on("connection", (socket, request) => {
    return this.onSocketConnect(socket, request);
});

server.listen(9999, '0.0.0.0');
Once the certs are retrieved via https://:9999 I can then play videos no problem on that browser at wss://:9999, I must be missing something.
The question here is answered entirely by the world of SSL/TLS. The issue was that secure browsers will pretty much silently reject wss:// connections to servers with self-signed certificates. My certificates were self-signed.
This is why the user had to navigate to the server IP directly over HTTPS first and accept the warnings. From there it's business as usual.
What needed to be done was to register a domain name for the IP the server was located on (a DigitalOcean Droplet). Then certbot was used to generate real certificates (key, cert) for the domain. I replaced the cert.pem and key.pem above with the generated ones. The domain name can be anything, like mywebsitewhatever.app.
Now, on the client side, you can open a connection to wss://mywebsitewhatever.app:9999 and the browser will accept it automatically, and things work. No warnings or navigating to a warning page to accept.

Only allow computers on the same network using Express-ip-filter

So I'm using localtunnel to expose my ports over the internet, but I only want to let devices on the same network as the server access the server.
I'm using express-ip-filter to filter out anything that's on a different network. I tried a few things: first I used 192.168.1.0/24 as the only range that could access the website, but that didn't work, as it didn't let anything in. I then tried the IP I got from WhatsMyIp, but that wouldn't let any device in either. I then found out that express-ip-filter prints a message saying a certain IP was not allowed, and on every device, regardless of the network it was connected to, that address was 127.0.0.1. I confirmed this by allowing only 127.0.0.1, and then every device could access the server. Why would ip-filter only ever see 127.0.0.1? Here's my code for reference:
// Init dependencies
var express = require('express'),
    ipfilter = require('express-ipfilter').IpFilter,
    app = express()

// Allow only the following IPs
var ips = ['192.168.1.0/24']

// Create the server
app.use(ipfilter(ips, { mode: "allow" }))

app.get('/', function (req, res) {
    res.send('Hi')
})

app.listen(8080, () => console.log('Up'))
From my limited understanding of localtunnel, it seems like it proxies users' requests to you via the localtunnel software, which causes all users to have the same IP. In layman's terms:
User connects to your site through localtunnel
localtunnel copies the user's request and sends it to your computer
Your application receives the request, but it looks like all traffic is coming from localtunnel, because it's incredibly difficult, if not impossible, for localtunnel to imitate someone else's IP.
Why use localtunnel at all if you only want devices on the same network to connect? You don't need any port forwarding or DNS setup just to access another machine on the same local network.
If you really do need to tunnel connections, then there is a solution - not with localtunnel (which, as far as I can tell, does not use forwarding headers; if someone knows that it does, I'll update my answer) but with https://ngrok.com, which does exactly the same thing but also sends a little extra bit of data in every request telling the application what the client's actual IP is.
Install ngrok
Run ngrok http -subdomain=(the subdomain you want) 80
Edit your application code to find the real client IP
var findProxyIP = function(req) {
    // x-forwarded-for can hold a comma-separated chain; the left-most
    // entry is the original client. Fall back to the socket address when
    // the header is absent (direct connection).
    var realIP = req.header('x-forwarded-for');
    return realIP ? realIP.split(',')[0].trim() : req.connection.remoteAddress;
}

app.use(ipfilter(ips, {
    mode: "allow",
    detectIP: findProxyIP
}));
ngrok is much more complex and has a lot more features than localtunnel; however, it is freemium software, and its free plan is quite limiting.
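One caveat on the detectIP approach above: x-forwarded-for can carry a comma-separated chain of addresses when several proxies are involved, and it is absent entirely when the client connects directly. A small helper that handles both cases (the function name is mine):

```javascript
// x-forwarded-for looks like "client, proxy1, proxy2" when the request
// passed through several proxies; the left-most entry is the original
// client as reported by the first proxy.
function clientIpFromForwardedFor(header, fallback) {
  if (!header) return fallback; // direct connection: no header at all
  return header.split(",")[0].trim();
}
```

Note that the header is only trustworthy when it is set by a proxy you control; on direct connections a client can forge it freely.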

No need to set up a local server with Node.js?

I see that when I want to write a Node.js web application on my local machine, I don't need to set up a local server using WAMP or MAMP. What is Node.js really doing behind the scenes? Here is the code for a simple hello world web app:
var http = require("http");

http.createServer(function(request, response){
    response.writeHead(200, {"content-type": "text/html"});
    response.write("hello world");
    response.end();
}).listen(8080);

console.log("server is running....");
When I load "localhost:8080" in my browser's URL bar, it works.
How is this working, and why don't I need a local server when working with Node.js?
You do have a local server... it's your Node.js application.
When you call http.createServer(), it creates an HTTP server. When you call .listen() on that server, it binds to the requested port, and optionally requested address, and listens for connections. When data comes in on those connections, it responds like any other HTTP server.
The HTTP server uses your request/response callback, firing it whenever a valid HTTP request comes in.
Because Node comes out of the box with all the libraries you need to run a web server, the http library you are using opens port 8080 and handles each request with the function you provided.
This part:
function(request,response){
    response.writeHead(200, {"content-type":"text/html"});
    response.write("hello world");
    response.end();
}
No, you don't need one, because Node itself can be your web server, just like in your example. Node is built on V8, Chrome's JavaScript engine.
Take a look at the Express.js module, which gives you lots of features out of the box.

event stream request not firing close event under passenger apache

So I have an event stream in my Express.js Node application. Here's an overview:
app.get('/eventstream', function(req, res){
    req.socket.setTimeout(Infinity);
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });
    res.write('\n');
    req.on('close', function(){
        console.log('connection closed');
    });
});
On my local dev box, running from the command line with
node app.js
it works fine, and prints out 'connection closed' when I close the tab in my browser.
However, when running on my server under Apache with Passenger, it doesn't print any message - the server never seems to fire the 'close' event. I'm using this event to decrement my count of active users. Any ideas?
Cheers,
Dan
Phusion Passenger author here. The short answer is: technically, the connection hasn't closed yet.
Here's the long answer. If the client connects directly to your Node.js process, then yes, the connection is closed. But with Phusion Passenger there's a proxy in between the client and Node.js. The thing about sockets is that there are two ways to find out whether a socket has been closed: either 1) by reading end-of-file from it, or 2) by writing to it and getting an error. Phusion Passenger stops reading from the client as soon as it has determined that the request body has ended, and in the case of GET requests, that is immediately after the headers. Thus, the only way Phusion Passenger can notice that the client has closed the connection is by sending data to it. But your application never sends any data after that newline, so Phusion Passenger doesn't either, and it never notices that the connection is closed.
This issue is not limited to Phusion Passenger. If you put your Node.js app behind a load balancer or any other kind of reverse proxy, then you could also run into the same issue.
The standard solution is to regularly send "ping" messages, with the purpose of checking whether the connection is alive.
A similar issue also applies to WebSockets. It is the reason why the WebSocket protocol supports ping frames.
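The ping approach can be sketched as a small helper (the helper name and interval are mine). Writing an SSE comment line at a fixed interval is enough: lines starting with a colon are ignored by EventSource clients, but the write itself is what lets the proxy and Node detect a dead socket:

```javascript
// Send an SSE comment frame (":ping\n\n") every intervalMs. Comment frames
// are ignored by EventSource clients, but writing to a closed downstream
// socket errors, which is how the proxy (and your app) learn the client
// went away.
function attachPing(res, intervalMs) {
  const timer = setInterval(function () {
    res.write(":ping\n\n");
  }, intervalMs);
  return timer; // caller clears it in the 'close' handler
}

// Usage inside the event-stream route from the question:
// const timer = attachPing(res, 15000);
// req.on('close', function () { clearInterval(timer); });
```

With the pings flowing, the 'close' event fires behind Passenger (or any reverse proxy) shortly after the client disconnects, instead of never.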
UPDATE February 28, 2016:
We have found a solution: Passenger 5.0.26 and later support forwarding half-close events, which fixes the problem described by #coffeedougnuts. Just use 5.0.26 or later and it'll work as expected.
