Best ways to configure nginx to serve yeoman angular generator files? - javascript

I am quite new to configuring production servers to serve a UI; I always wrote my code and someone else handled the deployment, but I wish to move to the next stage now. That said, I wrote my UI code starting from the yeoman-angular-generator and I wish to deploy it on my production Amazon EC2 instance.
I configured nginx on the instance, set up Route 53, and I am able to serve the default 'Welcome to nginx' page from mydomain.com. What I wish to do is serve my UI from mydomain.com. I tried to write a server block with location '/' pointing to the index.html in my dist folder, but it is not working.

I usually set up something like this:
server {
    listen 80;
    server_name your.project.com;

    location / {
        root /path/to/project/dist/;
        try_files $uri /index.html;
    }
}

Related

Kubernetes, Nginx, SPA hosted on a shared domain

I'm working on a project that's hosted on Kubernetes. There's an ingress configured so that going to the domain root (let's call it example.com) directs to an nginx service hosting an SPA that's already running; certain API paths (/v1, /v2, etc.) point to different API services.
We are adding a second SPA that will be hosted under example.com/ui.
I should probably mention this SPA is built using Lit element (web components), but it gets pre-processed (minified/transpiled/etc.) and hosted via nginx rather than using polymer-serve or something similar. The processing is done via rollupjs and tailwind-cli (for the CSS). The site also uses routing, so other routes like example.com/ui/someOtherRoute define user views in the SPA but should just return the index.html from the server.
I started with an nginx config I use for all of my other SPAs; however, it's set up to use the root domain.
server {
    listen 80;
    listen [::]:80 default ipv6only=on;

    root /usr/share/nginx/html;
    index index.html;

    server_name _; # all hostnames

    location / {
        try_files $uri /index.html;
    }

    add_header Access-Control-Allow-Origin *;
}
So we have a situation where, if the URI doesn't contain /ui, it won't get routed to my nginx server.
When something does contain /ui, I want to check for matching files and otherwise serve the index.html.
I don't have much experience with nginx but have been researching potential solutions.
Currently I'm trying to leverage the alias feature, as such:
location /ui/ {
    alias /usr/share/nginx/html;
    try_files $uri /index.html;
}
My understanding is that using the alias will strip the "/ui" from the search path, so that a request for example.com/ui/somefile.css would look up files as if it were example.com/somefile.css.
However, I also found an old bug that's still open here, stating that try_files $uri isn't affected by the alias, so the try_files lookup must still use the /ui in its file path.
I also tried leveraging the HTML base tag in my index, but that started adding /ui/ui to all of my CSS and JS script requests.
I may be way off in my understanding here, but any guidance would be great. I have limited control over what can be changed in the Kubernetes ingress, but I have complete control over the nginx container and the SPA code base.
We have a fallback plan of adding a separate subdomain for the new UI app, which will work and which we've done many times before. I'm just really stuck in my head on this because I'm sure it can be solved as is, and I'd really like to figure it out.
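A commonly suggested workaround for this alias/try_files interaction is to make the try_files fallback a full URI that re-enters the location, and to give alias a trailing slash so the /ui prefix is substituted rather than concatenated. A sketch, assuming the new SPA's build is copied to /usr/share/nginx/html:

```nginx
location /ui/ {
    # Trailing slash on alias: a request for /ui/somefile.css is looked
    # up on disk as /usr/share/nginx/html/somefile.css.
    alias /usr/share/nginx/html/;

    # The last try_files argument is an internal redirect URI, so it must
    # keep the /ui prefix; the redirect re-enters this location, where the
    # alias maps it back to /usr/share/nginx/html/index.html on disk.
    try_files $uri $uri/ /ui/index.html;
}
```

With this shape, missing files and client-side routes like /ui/someOtherRoute both fall back to the SPA's index.html, while real assets under /ui/ are served from disk.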

Problems with deploying on AWS EC2

I tried to deploy a MERN app on AWS EC2 for the first time.
I completed this article's steps, except for installing MongoDB: because I used Ubuntu 20.04 instead of 16.04, I followed the docs instead, and that worked.
In the end I started app.js from /server/dist with pm2, and it reported a running status, but when I tried to access the website, it said the connection was refused.
This is my GitHub repository.
If you have time and experience with AWS EC2, please check it! I couldn't find anything explaining why it wouldn't work.
UPDATE:
I tried to curl localhost, and it returns the HTML page, so that works.
I checked the status of nginx and it is running, but I think nginx is the problem.
I tried to set it up using this article. When I test the nginx config I get an OK, but it still doesn't work.
This is what /etc/nginx/sites-enabled/amazon-clone contains:
server {
    listen 80;
    listen [::]:80;

    root /var/www/your_domain/html;
    index index.html index.htm index.nginx-debian.html;
    server_name;

    location / {
        try_files $uri $uri/ =404;
        proxy_pass http://172.31.39.190:5000;
    }
}
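Two things in this block commonly cause exactly these symptoms. First, with try_files ... =404 in the same location as proxy_pass, nginx returns 404 for any path that doesn't exist on disk before the proxy is ever consulted. Second, a refused connection from outside, while curl on localhost works, usually means the EC2 security group doesn't allow inbound traffic on port 80. A sketch of a proxy-only server block, assuming the Node app really does listen on 172.31.39.190:5000 as in the question:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name _;

    location / {
        # No try_files here: every request is handed to the Node app,
        # which is what a pure reverse proxy should do.
        proxy_pass http://172.31.39.190:5000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```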

Deploying react js and node js full stack on AWS production?

I have currently deployed React and Node.js behind nginx on AWS. I have no issues in deployment and no errors.
The current environment is: PRODUCTION.
But I have a doubt about whether the method I follow is right or wrong. This is the method I followed: https://jasonwatmore.com/post/2019/11/18/react-nodejs-on-aws-how-to-deploy-a-mern-stack-app-to-amazon-ec2
The following is my nginx configuration
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        root /var/apps/front_end/build;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://0.0.0.0:3005/;
    }
}
As shown above, I copied the build folder (after npm run build) to the AWS instance and pointed nginx at that location. The backend is copied as-is to the instance; npm start runs it on port 3005, and I gave that address to the /api location as a proxy_pass.
I see a couple of others using a server.js for the front end, putting the build folder's files there, and pointing nginx at that server.js.
So should I do it that way? Or am I fine with the current method, as seen in the link above?
Just like everything else, there are multiple ways to go about this. From the way you have ended the question, it looks like you are open to exploring them.
Here are my preferences, in increasing order of responsibilities on my side versus what AWS handles for me:
AWS Amplify:
Given that you are already using React and Node, this will be a relatively easy switch. Amplify is not only a very useful frontend framework, making it easy to add functionality like authentication, social logins, and rotating API keys (via Cognito and API Gateway), but it also covers backend logic that can eventually be deployed on AWS API Gateway and AWS Lambda. On top of that, Amplify provides a CI/CD pipeline and connects with GitHub.
In minutes, you can have a scalable service, with the option to host the frontend via AWS CloudFront (a global CDN service) or via S3 hosting, deploy the API via API Gateway and Lambda, have a CI/CD pipeline set up via AWS CodeDeploy and CodeBuild, and also have user management via AWS Cognito. You can have multiple environments (dev, test, beta, etc.) and set it up so that any push to the master branch is automatically deployed on the infra, and so on, with other branches mapped to specific environments. To top it all off, the same stack can be used to test and develop locally.
If you are rather tied down to use a specific service or function in a specific way, you can build up any of the combination of the above services. API Gateway for managing API, Cognito for user management, Lambda for compute capacity etc.
Remember, these are managed services, so you offload a lot of engineering hours to AWS, and being serverless means you pay only for what you use.
Coming to the example you have shared: you don't want your Node process to be responsible for serving static assets. It's a waste of compute power, as there is no intelligence attached to serving JS, CSS, or images, and it also introduces an extra process in the loop. Instead, have nginx serve the static assets itself. Refer to this official guide or this StackOverflow answer.
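For the setup in the question, having nginx serve the build directly can be taken a step further by letting nginx attach cache headers to the static bundles, so the Node process on port 3005 only ever sees API traffic. A sketch, reusing the paths from the question:

```nginx
location / {
    root /var/apps/front_end/build;
    try_files $uri /index.html;
}

# Static build artifacts can be cached by browsers; nginx serves them
# straight from disk without involving the Node process at all.
location ~* \.(?:js|css|png|jpg|jpeg|gif|svg|ico|woff2?)$ {
    root /var/apps/front_end/build;
    expires 30d;
    add_header Cache-Control "public";
}

location /api/ {
    proxy_pass http://0.0.0.0:3005/;
}
```

The 30-day expiry is an illustrative choice, not a requirement; it is only safe if the build emits content-hashed filenames (as create-react-app's npm run build does).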

Nginx ignoring /api location directive with react-router 4, except in firefox private browsing

I have a React app that is served via nginx, and a nodejs api server behind nginx reverse proxy. The nginx configuration looks like this:
location / {
    try_files $uri /index.html;
    add_header Cache-Control public;
}

location /api/ {
    proxy_pass http://localhost:8000;
    proxy_http_version 1.1;
}
In Firefox private browsing, things work as expected: when I refresh or get redirected to domain.com/api, the request gets proxied to the Node server.
However, in non-private Firefox and in Chrome (incognito or not), any page refresh/redirect to domain.com/api will load the React app and treat /api as a react-router route. The strange thing is, if I clear cookies/history and direct my browser to domain.com/api, I will correctly be proxied to the Node server. The issue only occurs after I have loaded the React app once before.
This is driving me crazy; any ideas? I was thinking about downgrading react-router to version 3, but that would require some refactoring and I don't know if it would solve things.
I fixed it for now by removing registerServiceWorker() from create-react-app boilerplate.
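The fix above removes the service worker registration from the create-react-app boilerplate entirely, so a previously installed worker stops intercepting navigations such as /api. What the boilerplate's unregister() path does, roughly, is ask the browser to drop every service worker registered for the origin; a self-contained sketch of that behaviour (the function name here is mine, not CRA's):

```javascript
// Unregister every service worker installed for this origin, so a
// previously cached worker no longer intercepts navigation requests.
function unregisterServiceWorkers() {
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return; // not running in a browser, or service workers unsupported
  }
  navigator.serviceWorker.getRegistrations()
    .then((registrations) => registrations.forEach((reg) => reg.unregister()));
}

unregisterServiceWorkers();
```

Calling this once on page load means the stale worker is gone on the next navigation, after which /api requests reach nginx (and the proxy) again.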

Nginx caching angular js files

I have an Angular site that I'm trying to publish to nginx; this is my nginx configuration. The routing to the pages works:
server {
    listen 80;
    server_name test.domain.com;

    root /home/user/angular/app/;
    index index.html;

    location / {
        root /home/user/angular/app/;
        try_files $uri $uri/ /index.html =404;
    }
}
As you can see, it doesn't have anything about caching, and according to the nginx docs it shouldn't cache anything that way.
But when I make changes in my JS files (HTML changes are displayed correctly), they are not refreshed accordingly: I see things I deleted from them, wrong behavior, etc.
Any ideas?
Thank you
I'd try adding this block of code to my conf file to prevent caching of JS files:
location ~* \.(?:html|js)$ {
    expires -1;
}
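Beyond disabling caching outright, another common pattern (assuming the Angular build emits content-hashed filenames such as main.a1b2c3d4.js, which recent Angular CLI production builds do) is to let browsers cache the hashed bundles for a long time while keeping index.html itself uncached, so each deploy references fresh filenames. A sketch:

```nginx
# index.html must always be revalidated, so new deploys take effect.
location = /index.html {
    root /home/user/angular/app/;
    add_header Cache-Control "no-cache";
}

# Content-hashed bundles never change in place, so they can be cached
# aggressively; a new build produces new filenames instead.
location ~* \.[0-9a-f]{8,}\.(?:js|css)$ {
    root /home/user/angular/app/;
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```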
