Web resources don't load on Beanstalk - https instead of http header - JavaScript

Issue
So I created a simple HTTP web server with Node.js and Express (mostly it's just the skeleton from the Express application generator). Then I uploaded the server to an AWS Beanstalk web environment.
The issue is that I can't load the resources (CSS and JavaScript) from the server when I connect to it.
I get a
net::ERR_CONNECTION_TIMED_OUT
for all resources when I open the site in my browser.
I assume the issue is that the GET request on Beanstalk uses an "https" URL:
Request URL: https://...elasticbeanstalk.com/javascripts/GameLogic.js
It works on my localhost, but there an "http" URL is used:
Request URL: http://localhost:3000/javascripts/GameLogic.js
Also, the HTML page itself loads (after the resources time out), and that request uses "http":
Request URL: http://....elasticbeanstalk.com/
Can I change the request URL (for CSS and JS) in the AWS Beanstalk web environment to use http instead of https? Or can I change it in the HTML or in Node.js?
Info
The server uses the Node.js helmet module.
I just send my HTML page on incoming requests:
app.get('/', function(req, res, next) {
  res.sendFile(path.join(__dirname, "public", "main.html")); // Serve main.html for the / path
});
In the HTML page I request the resources:
<link rel="stylesheet" type="text/css" href="/stylesheets/style.css">
<script src="/javascripts/jquery.min.js"></script>
<script src="/javascripts/GameLogic.js"></script>
Solution attempts
I have tried removing helmet, but that version behaves exactly the same and doesn't load the resources on Beanstalk (on the localhost server it always worked).
I also tried changing some security group rules to allow HTTPS on port 443 from all sources to the load balancer, and HTTPS on port 443 from the load balancer to the EC2 instance. The situation didn't change.
Then I tried redirecting HTTPS requests to HTTP:
app.get('/', function(req, res, next){
  console.log("redirect to http?");
  res.redirect('http://' + req.headers.host + req.url);
});
But then the site doesn't even load the HTML because of "too many redirects": the handler redirects every request, including the plain-HTTP one it has just produced.
So currently I'm out of ideas on how to make the HTTPS requests work or how to change them into HTTP requests.
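For reference: behind the Beanstalk load balancer, the client's original scheme arrives in the X-Forwarded-Proto header, so a guard on it would at least break the redirect loop. A sketch only, not something I actually deployed:

app.get('/', function(req, res, next) {
  // The load balancer sets X-Forwarded-Proto to the client's original scheme.
  if (req.headers['x-forwarded-proto'] === 'https') {
    // Only redirect requests that really came in over https.
    return res.redirect('http://' + req.headers.host + req.url);
  }
  next();
});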
Note
I am also using a student account, so I have no rights to use the AWS Certificate Manager or to redirect ELBs to HTTPS, if that has anything to do with it.

OK, it was the helmet module for Node.js that was at fault. I don't understand it well enough to say what exactly changed the requests to https; HSTS seems to be only part of it.
But after completely removing it, I and other people could access the web app and load the resources.
The reason why I didn't catch this earlier in my tests:
I never deleted or regenerated my package-lock after removing helmet, so it was still in there. Now I uninstalled it and made a new package-lock before uploading to AWS Beanstalk.
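If you want to keep the rest of helmet's protections, a possible middle ground (a sketch, assuming helmet 4+) is to disable only its HSTS middleware, since the Strict-Transport-Security header is what makes browsers upgrade later requests to https:

const helmet = require('helmet');

// Keep helmet's other protections, but skip the Strict-Transport-Security
// header, which tells browsers to force https on subsequent visits.
app.use(helmet({ hsts: false }));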

Related

Single HTML page on disk: request to localhost gives CORS error

Everyone:
I am a beginner at front-end development and created a single HTML file on disk, for example "D:/index.html".
Then I start a Node.js server.
When I run the Node.js server and open the "index.html" page in the browser, a CORS error occurs.
I know that adding res.setHeader("Access-Control-Allow-Origin", "*"); to the Node.js server avoids the CORS error.
My question is: the single HTML page is located on disk, and I think it also comes from localhost, so why does a request to the localhost server trigger CORS?
When we say "the same origin", the port, the protocol, and the host should all be the same for both sides. A page opened from disk is loaded under the file:// scheme (its origin is effectively "null"), so it never shares an origin with http://localhost.
Please check https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
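For completeness, a minimal sketch of the server-side fix mentioned in the question (plain Node http module; the port is made up):

const http = require('http');

http.createServer(function (req, res) {
  // Allow any origin, including the "null" origin of a page opened via file://
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
}).listen(3000);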

Cookie set by Flask app sent but not stored

I have a backend Flask app running on localhost:3000 and a React front-end app running on localhost:5000. In my backend app I am using Flask's 'Response.set_cookie' to set a cookie:
resp = make_response({}, 200)
resp.set_cookie('my_cookie_name', 'my_val', max_age=604800, domain='127.0.0.1', samesite='Lax', secure=None, httponly=None)
I am also allowing cross-origin for all responses in my Flask app as follows:
# Child class of Flask to override some features
class TailoredFlask(Flask):
    # Override make_response
    def make_response(self, rv):
        # Call the default version from the parent
        resp = super().make_response(rv)
        # Add CORS headers to every response
        resp.headers["Access-Control-Allow-Origin"] = "*"
        resp.headers["Access-Control-Allow-Methods"] = "GET,POST,OPTIONS,HEAD"
        resp.headers["Access-Control-Allow-Headers"] = "Origin, X-Requested-With, Content-Type, Accept, Authorization"
        return resp
My client accesses my Flask cookie endpoint with a call to fetch.
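As an aside, for a cross-origin request the browser only stores and sends cookies when the call is made with credentials, roughly like this (a sketch; the endpoint name is made up):

// Hypothetical endpoint; the important part is the credentials option.
fetch('http://127.0.0.1:3000/set-cookie', {
  credentials: 'include', // let the browser store/send cookies cross-origin
}).then(function (resp) {
  console.log(resp.status);
});

Note that credentialed requests are incompatible with Access-Control-Allow-Origin: *; the server must echo a concrete origin and also send Access-Control-Allow-Credentials: true, which is another common reason for silently dropped cookies.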
In the Chrome dev tools I can see that the cookie is sent with the HTTP response from my backend. It is visible on the Network → Cookies tab when I select the request to my backend. However, if I go to the Application tab in the dev tools, my cookie is not there.
It seems like Chrome is silently discarding my cookie. I have seen several similar issues here on SO, but none of them seem to explain what is going on or provide a solution to my issue.
I'm also confused about the cookie options. There is a 'domain' option, which I've read is there to allow cross-domain operation for the cookie. However, everything is running on localhost, so I feel I shouldn't need this unless the port is causing issues. Then again, I have also read that the port should not be included in the cookie 'domain' field.
If anyone can help to explain this to me I would greatly appreciate it because I'm just going round in circles with this stuff.
One more thing to note: I am pointing the browser at 'localhost', but the API call to my backend and the cookie domain both use '127.0.0.1', since I've read elsewhere that the 'domain' field must have at least two dots in it. (I don't have a choice in the browser URL since I am using the AWS Cognito login UI to redirect to my app after login. Cognito allows http for 'localhost' but only https for '127.0.0.1', so I have to use 'localhost' for development.) Could the mismatch between the browser URL and the cookie domain be causing this issue? Or is there something else that I'm missing?
OK, so I think I now understand what's going on here, although I don't think there's a fix for my specific problem. As described in this thread, browsers (including Chrome) will not allow a domain of 'localhost' within a cookie. (I just wish there were a message in the console or something to indicate why the cookie is not being saved, rather than a silent fail!)
There are various suggestions for workarounds, such as using '.app.localhost' to access the application. Unfortunately this is not an option for me, as I am redirecting to my front-end app from AWS Cognito, and the only domain it supports with HTTP (rather than HTTPS) is 'localhost'. Variants such as '.app.localhost' or '127.0.0.1' are not allowed.

Getting server certificates from https/ws server automatically

I'm hosting a website on GitHub Pages that opens a video stream on certain clicks over a secure websocket (wss), using this library: github.com/websockets/ws. When connecting to the video from an http page over a plain websocket (ws), everything was fine, but I realized the GitHub page will be hosted over https, and Firefox does not allow unsecured websocket connections from https pages; wss is required.
To do that I followed the steps on the websockets/ws page, and the websocket server is now "hiding" behind an https server that does the certificate handshake. The problem is this: if I use my web page normally and open the link that starts a wss connection to the video, it doesn't work; the server doesn't know it exists. If I first visit the server directly via IP and port using an https://IP:9999 link, it retrieves the certs, and from then on the website works indefinitely.
Is there a way to do this naturally that doesn't require visiting a headless server to do the cert handshake? I just want to open a wss connection, but the overhead of having to visit the server directly for the certs seems a bit bizarre.
The server setup looks like this:
const server = https.createServer({
  cert: fs.readFileSync('./cert/cert.pem'),
  key: fs.readFileSync('./cert/key.pem'),
}, function (req, res) {
  console.log(new Date() + ' ' +
    req.connection.remoteAddress + ' ' +
    req.method + ' ' + req.url);
  res.writeHead(200);
  res.end("hello foobarbackend\n");
});

this.wsServer = new ws.Server({
  server
});

this.wsServer.on("connection", (socket, request) => {
  return this.onSocketConnect(socket, request);
});

server.listen(9999, '0.0.0.0');
Once the certs are retrieved via https://:9999, I can then play videos with no problem in that browser at wss://:9999. I must be missing something.
The question here is answered entirely by the world of SSL/TLS. The issue was that secure browsers will pretty much silently reject wss:// connections to servers with self-signed certificates, and my certificates were self-signed.
This is why the user had to navigate to the server IP directly over HTTPS first and accept the warnings. From there it's business as usual.
What needed to be done was to register a domain name for the IP the server was located on (a Droplet). Then certbot was used to generate real certificates (key, cert) for the domain, and I replaced the cert.pem and key.pem above with the generated ones. The domain name can be anything, like mywebsitewhatever.app.
Now on the client side you can open a connection to wss://mywebsitewhatever.app:9999, and the browser will accept it automatically and things work. No warnings or navigating to a warning page to accept.
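For illustration, the client side then reduces to a plain WebSocket call (a sketch using the example domain from above):

// Browsers accept this without warnings once the server presents a
// CA-signed certificate matching the domain name.
const socket = new WebSocket('wss://mywebsitewhatever.app:9999');
socket.addEventListener('open', function () {
  console.log('wss connection established');
});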

NodeJS: Send HTTPS request but get HTTP

I am building a website using Node.js, and I deploy it to Heroku. But when I open the website, something goes wrong. Here is the problem:
Code:
In the main source file of my web app:
app.get('/', (req, res) => {
  var data = {
    rootURL: `${req.protocol}://${req.get('Host')}`,
  };
  res.render('home.html', data);
});
Then, in home.html, I include the following script:
<script type="text/javascript">
  $.getJSON('{{rootURL}}' + '/about', {}, function (data) {
    // Code here is deleted for now.
  }).fail(function (evt) {
    // Code here is deleted for now.
  });
</script>
Here I use the hbs template engine, so {{rootURL}} is equal to the 'rootURL' property of the 'data' object rendered along with the 'home.html' page.
'/about' is one of the APIs I designed for my web app. It basically sends back some information about the website itself, wrapped in JSON.
Then here comes the problem. The code works fine locally, and works well when I send an HTTP request (instead of HTTPS) to Heroku. But if I send an HTTPS request to Heroku, I get 'Mixed Content' errors:
Errors I get in Chrome Console.
I then switched to the 'Elements' tab in the developer tools, and I saw this:
The scheme is HTTP, not HTTPS!
I'm very confused here. I just grab the 'protocol' property from the 'req' object and fill in the template with it. So I assumed that if I entered '[my-website-name].herokuapp.com' with the 'https' scheme in my Chrome browser, my Node.js app deployed on Heroku would get 'https' for req.protocol. But apparently that's not the case. What is wrong here?
I assume you don't actually have an SSL certificate? Heroku provides the HTTPS, but it translates it to plain HTTP internally before it hits your Express endpoint, which is why req.protocol reports HTTP.
Is there any point in even providing the URL to getJSON? Why not just use $.getJSON('/about', callback) and let the browser handle it?
Also, you haven't hidden your URL in that first image you uploaded, if that's what you were intending.
The Heroku router does SSL termination, so whether you connect via http or https, you get http on your side. The original protocol is, however, set in the X-Forwarded-Proto header. You need to use this value.
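In Express you can either read that header yourself or enable the built-in proxy support, after which req.protocol reflects X-Forwarded-Proto. A sketch:

// Tell Express it sits behind a trusted reverse proxy (the Heroku router);
// req.protocol then reports the value of X-Forwarded-Proto.
app.enable('trust proxy');

app.get('/', (req, res) => {
  // Or read the header directly, falling back to req.protocol:
  const proto = req.headers['x-forwarded-proto'] || req.protocol;
  res.render('home.html', { rootURL: `${proto}://${req.get('Host')}` });
});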

How can I use HTTPS in AngularJS?

I am using AngularJS with $resource and $http, working with APIs. However, for security reasons I need to make HTTPS requests (work under the HTTPS protocol).
What's the way to use HTTPS in AngularJS?
Thanks for your help.
For some reason Angular sends all requests over HTTP if you don't have a trailing / at the end of your request URL, even if the page itself is served through HTTPS.
For example:
$http.get('/someUrl').success(successCallback); // Request would go over HTTP even if the page is served via HTTPS
But if you add a trailing /, everything works as expected:
$http.get('/someUrl/').success(successCallback); // This would be sent over HTTPS if the page is served via HTTPS
EDIT: The root cause of this problem is that Angular looks at the actual headers from the server. If you incorrectly pass http data through https internally, there will still be http headers, and Angular will use them if you do not add a / at the end.
I.e., if you have NGINX serving content through https but passing requests to Gunicorn on the backend via http, you might have this issue. The way to fix it is to pass the correct headers to Gunicorn, so your framework is under the impression that it is served via https. In NGINX you can do this with the following line:
proxy_set_header X-Forwarded-Proto $scheme;
Use the $http API as you would normally:
$http.get('/someUrl').success(successCallback);
If your app is being served over HTTPS, then any calls you make go to the same host/port etc., so they are also made via HTTPS.
If you use full URIs for your requests, e.g. $http.get('http://foobar.com/somePath'), then you will have to change your URIs to use https.
I've recently run into similar issues using Angular 1.2.26, but only when interacting through a load balancer, which may be stripping https-related headers (not sure of the cause yet). I've resorted to this:
uri = $location.protocol() + "://" + $location.host() + "/someUrl"
You might also want to add $location.port() if you are using a non-standard port, as in the sketch below.
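A variant that includes the port might look like this (a sketch following the same pattern):

// Build an absolute URL preserving scheme, host and a non-standard port.
var uri = $location.protocol() + "://" + $location.host() +
          ":" + $location.port() + "/someUrl";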
