Kubernetes, Nginx, SPA hosted on a shared domain

I'm working on a project that's hosted on Kubernetes. There's an ingress configured so that the domain root (let's call it example.com) directs to an nginx service hosting an SPA that's already running, while certain API paths (/v1, /v2, etc.) point to different API services.
We are adding a second SPA that will be hosted under example.com/ui.
I should probably mention that this SPA is built with LitElement (web components), but it gets pre-processed (minified/transpiled/etc.) and hosted via nginx rather than served with polymer-serve or similar. The processing is done with Rollup and the Tailwind CLI (for the CSS). The site also uses routing, so other routes like example.com/ui/someOtherRoute define user views in the SPA but should just return index.html from the server.
I started with an nginx config I use for all of my other SPAs; however, it's set up to serve from the root of the domain.
server {
    listen 80;
    listen [::]:80 default ipv6only=on;

    root /usr/share/nginx/html;
    index index.html;
    server_name _; # all hostnames

    location / {
        try_files $uri /index.html;
    }

    add_header Access-Control-Allow-Origin *;
}
So we have a situation where, if the URI doesn't contain /ui, it won't get routed to my nginx server at all. When a request does contain /ui, I want to check for a matching file and otherwise serve index.html.
I don't have much experience with nginx but have been researching potential solutions.
Currently I'm trying to leverage the alias directive, like so:
location /ui/ {
    alias /usr/share/nginx/html;
    try_files $uri /index.html;
}
My understanding is that alias strips the "/ui" prefix from the lookup path, so a request for example.com/ui/somefile.css looks up the file as if the request were example.com/somefile.css.
However, I also found an old bug that's still open here, stating that try_files $uri isn't affected by the alias, so the try_files lookup must still include /ui in its file path.
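For what it's worth, a workaround often suggested for that bug (a sketch, untested against your exact image layout) is to make the try_files fallback an absolute URI under /ui/, so the fallback goes through an internal redirect and the alias is applied on the second pass:

```nginx
location /ui/ {
    # Trailing slash on both the location and the alias keeps the
    # mapping clean: /ui/somefile.css -> /usr/share/nginx/html/somefile.css
    alias /usr/share/nginx/html/;

    # The last try_files argument is a URI, not a file path: the
    # internal redirect to /ui/index.html re-enters this location,
    # where the alias maps it to /usr/share/nginx/html/index.html.
    try_files $uri /ui/index.html;
}
```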
I also tried using the HTML base tag in my index, but that started prefixing /ui/ui to all of my CSS and JS requests.
I may be way off in my understanding here, but any guidance would be great. I have limited control over what can be changed in the Kubernetes ingress, but I have complete control over the nginx container and the SPA code base.
We have a fallback plan of adding a separate subdomain for the new UI app, which will work and which we've done many times before. I'm just really stuck in my head on this, because I'm sure it can be solved as is and I'd really like to figure it out.
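Since you control both the nginx container and the SPA build, another sketch worth considering is to sidestep alias entirely: copy the build output into a ui/ subdirectory of the web root (the directory name here is my assumption) and use root, which doesn't have the try_files quirk:

```nginx
location /ui/ {
    # Files live at /usr/share/nginx/html/ui/...; root appends the
    # full request URI, so no prefix stripping is needed.
    root /usr/share/nginx/html;
    try_files $uri /ui/index.html;
}
```

With this layout, building the bundle so asset URLs are relative (or prefixed with /ui/ exactly once) should also avoid the doubled /ui/ui requests seen with the base tag.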

Related

AWS Load Balancer - 'text/plain' is not a valid JavaScript MIME type

I have my Angular application running in AWS ECS (on an EC2 instance) behind a load balancer. When I access the application using the direct IP address of my EC2 instance, it loads fine without any issues. But when I access it through the application load balancer, I see an error in my browser console: 'text/plain' is not a valid JavaScript MIME type. I am not sure why the application works via the direct IP but fails with this error only when I use the load balancer URL. Please find the nginx configuration below.
server {
    include /etc/nginx/mime.types;
    listen 443;
    listen [::]:443;
    server_name sampleweb.com www.sampleweb.com;

    ssl_certificate /keys/cert.pem;
    ssl_certificate_key /keys/key.pem;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location ~ \.css {
        add_header Content-Type text/css;
    }

    location ~ \.js {
        add_header Content-Type application/x-javascript;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri /index.html;
        add_header 'Access-Control-Allow-Origin' '*';
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Can anyone help with this issue?
Hard to say specifically what is going wrong without a little more info, but my guess would be that the Content-Type header on the HTTP responses coming out of nginx is off somehow for your JS files, and the load balancer is setting Content-Type: text/plain as a placeholder / best guess.
I'd look at two things. First, the contents of your JavaScript file: is it a normal text file, e.g. UTF-8 encoded, no BOM, no binary data? Second, I'd remove the location ~ \.js block from your nginx config: application/x-javascript is now considered deprecated in favour of application/javascript, and nginx should already have a mime.types include file that will set the type correctly for .js files (and most other common file types).
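As a concrete sketch of that suggestion (paths are the common distro defaults), the per-extension blocks can be dropped and the bundled mime.types include left to set Content-Type on its own:

```nginx
server {
    # mime.types maps .css, .js, and most other common extensions
    # to their correct Content-Type values.
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri /index.html;
    }
}
```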
Best of luck!
I was able to figure out the issue. I went through all my configuration again to see if I had overlooked something, and it turned out to be a silly mistake I had introduced while configuring the listener for the load balancer. When I configured the path-based routing rules, I added a default rule (for when no path matches) at the end that returned status code 200 with Content-Type "text/plain". I modified it to route to my target group instead, and it works fine now.

API working in localhost but not working while uploading in server

I have created a website, example.com, which has two sections, "service" and "ideas", that show dynamic data. The data comes from http://api.example.com/ourservice.json and http://api.example.com/ideas.json.
While I'm running the project on localhost it fetches that data from the API, but when I uploaded the React build folder to the server it stopped fetching the data.
Is there anything I am missing? Is it a problem with the API?
In the nginx config file for api.example.com I have this:
server {
    # Binds the TCP port 80.
    listen 80;

    # Root directory used to search for a file
    root /var/www/html/example_static_api;

    # Defines the file to use as index page
    index index.html index.htm;

    # Defines the domain or subdomain name.
    # If no server_name is defined in a server block then
    # Nginx uses the 'empty' name
    server_name api.example.com;

    location / {
        # Return a 404 error for instances when the server receives
        # requests for untraceable files and directories.
        try_files $uri $uri/ =404;
        add_header Access-Control-Allow-Origin *;
    }
}
Is there anything I am missing?
The following link explains the reason for this problem and how to fix it. In short: the API you are calling is not served over SSL, while your site is loaded over HTTPS, so the browser blocks the request; you can easily see the error in the browser console.
How to fix "insecure content was loaded over HTTPS, but requested an insecure resource"
After searching on Stack Overflow I found the answer; there were two ways to solve it. The first is adding SSL to the API URL.
The second, originally answered here by @buzatto:
"That's related to the fact that the API is served over HTTP while your site is loaded over HTTPS, so the browser blocks the request.
Since you have no control over the 3rd-party API, you can solve the issue by adding the meta tag <meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">."
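If editing the HTML is inconvenient, the same policy can be sent as a response header instead of a meta tag. A hypothetical sketch; this belongs in the server block that serves the site's HTML (not the API's):

```nginx
location / {
    try_files $uri /index.html;

    # Ask the browser to upgrade http:// subresource requests
    # (scripts, fetch/XHR, images) to https:// before sending them.
    add_header Content-Security-Policy "upgrade-insecure-requests";
}
```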

Nginx ignoring /api location directive with react-router 4, except in firefox private browsing

I have a React app that is served via nginx, and a Node.js API server behind an nginx reverse proxy. The nginx configuration looks like this:
location / {
    try_files $uri /index.html;
    add_header Cache-Control public;
}

location /api/ {
    proxy_pass http://localhost:8000;
    proxy_http_version 1.1;
}
In Firefox private browsing, things work as expected: when I refresh or redirect to domain.com/api, the request gets proxied to the Node server.
However, in non-private Firefox and in Chrome (incognito and not), any page refresh/redirect to domain.com/api loads the React app and treats /api as a react-router route. The strange thing is, if I clear cookies/history and point my browser at domain.com/api, I am correctly proxied to the Node server. The issue only occurs after I have loaded the React app once before.
This is driving me crazy, any ideas? I was thinking about downgrading react-router to version 3, but that would require some refactoring and I don't know if it would solve things.
I fixed it for now by removing the registerServiceWorker() call from the create-react-app boilerplate.
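A server-side step I'd consider alongside that (a sketch, assuming create-react-app's default file names) is to keep the service worker script and index.html out of caches, so a stale worker can't keep intercepting /api after a deploy:

```nginx
# expires -1 emits "Cache-Control: no-cache", forcing revalidation.
location = /service-worker.js {
    expires -1;
}

location = /index.html {
    expires -1;
}
```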

Nginx caching angular js files

I have an Angular site that I'm trying to publish to nginx; this is my nginx configuration. Routing to the pages works:
server {
    listen 80;
    server_name test.domain.com;
    root /home/user/angular/app/;
    index index.html;

    location / {
        root /home/user/angular/app/;
        try_files $uri $uri/ /index.html =404;
    }
}
As you can see it doesn't configure any caching, and according to the nginx docs it shouldn't cache anything that way.
But when I make changes to my JS files (HTML changes are displayed correctly), they are not refreshed accordingly (I see things I deleted from them, wrong behavior, etc.).
Any ideas?
Thank you
I'd try adding this block to my conf file to prevent caching of JS files:
location ~* \.(?:html|js)$ {
    expires -1;
}
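If you later want caching back for performance, a common pattern (sketched here, assuming the build emits content-hashed filenames like app.3f2a1c.js) is to cache the hashed bundles aggressively while keeping the entry point uncached:

```nginx
# Never cache index.html, so new deploys are picked up immediately.
location = /index.html {
    expires -1;
}

# Hashed bundles are safe to cache for a long time: the filename
# changes whenever the content does.
location ~* \.(?:js|css)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```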

Best ways to configure nginx to serve yeoman angular generator files?

I am quite new to configuring production servers to serve a UI. I always wrote my code and someone else did the deployment, but I wish to move to the next stage now. That said, I wrote my UI code starting from the Yeoman Angular generator and I wish to deploy it on my production Amazon EC2 instance.
I configured nginx on the instance, set up Route 53, and I am able to serve the default 'Welcome to nginx' page from mydomain.com. What I want is to serve my UI from mydomain.com. I tried to write a server block with location / pointing to the index.html in my dist folder, but it is not working.
I usually set up something like this:
server {
    listen 80;
    server_name your.project.com;

    location / {
        root /path/to/project/dist/;
        try_files $uri /index.html;
    }
}
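A small variant worth sketching: hoisting root to the server block lets every location (and any error pages you add later) inherit it, with otherwise the same behaviour:

```nginx
server {
    listen 80;
    server_name your.project.com;

    # Inherited by all locations below.
    root /path/to/project/dist/;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}
```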
