Firebase Hosting can't find Express Endpoints - javascript

I have set up hosting on Firebase and configured my Node.js Express app following the documentation provided by Google, including the proper folder structure and the command-line steps to initialize Firebase and firebase-functions.
Folder structure:
Project
-- functions
   -- node_modules
   -- index.js
   -- package.json
-- public
   -- index.html
   -- 404.html
-- .firebaserc
-- firebase.json
I have added the Express app to the Firebase Functions HTTPS request handler via the code below:
const functions = require('firebase-functions');
const express = require('express');

const app = express();

// Set up endpoint
app.get('/api', (req, res) => {
  res.json({
    message: 'Welcome to the Line Prophet API. Good luck.'
  });
});

/**
 * Firebase cloud functions
 */
exports.app = functions.https.onRequest(app);
My firebase.json file is set up to direct all requests to the app destination:
{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "**",
        "destination": "app"
      }
    ]
  }
}
Once I run firebase deploy from my parent directory, everything goes through fine and it says the app is deployed.
However, after that I navigate to https://line-prophet.web.app/api and I receive a 404 page-not-found error.
I have tried to run this locally with firebase serve and I have the same issue. This was working briefly, which is why I feel as though everything is set up correctly; however, after deploying again it has broken for good. Any tips are appreciated. Thanks!
The latest deployments say there are only 4 files deployed to Firebase, which seems very low. I checked the Functions tab and I do see that "app" is in there and the source code is correct.

Thanks to @LawrenceCherone: changing the firebase.json file to:
"rewrites": [
{
"source": "**",
"function": "app"
}
]
solved my issue. I was using destination instead of function because of the online docs, but it makes more sense to direct all requests to the function you have set up to handle HTTP requests.
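For reference, here is a minimal sketch of the full firebase.json after the fix, assuming the same hosting setup as in the question:

{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "**",
        "function": "app"
      }
    ]
  }
}

With this in place, Hosting serves static files from public and forwards any request that doesn't match a static file to the app Cloud Function.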

Related

typescript, javascript, angular, nginx, alpine, docker: communication in the network via nginx. I think I missed something; looking for a review

Once I migrated to Docker to have a virtual network simulating an actual network (bridge type with DNS, which works; the FQDN is resolved correctly to the referring IP), the following errors appeared in the console AND no data is displayed on the frontend website.
ERROR Error: NG0901
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend:4000/crafts. (Reason: CORS request did not succeed). Status code: (null).
ERROR
Object { headers: {…}, status: 0, statusText: "Unknown Error", url: "http://backend:4000/crafts", ok: false, name: "HttpErrorResponse", message: "Http failure response for http://backend:4000/crafts: 0 Unknown Error", error: error }
That's the browser's (Firefox) console log.
I think nginx is doing things with the headers, and/or the body is empty due to server-side configs with nginx.
On localhost everything worked out fine,
so I'm working on the nginx config, but so far without any success. I read about similar problems but couldn't find a solution myself, OR the answers I read didn't work with my setup.
I tried to change the IP to 0.0.0.0 to make it accessible in the network.
Oh, AND I'm using Node.js with Express:
app.listen(port, ip)
I use a Dockerfile and docker-compose.yml to build the images, and a PowerShell script to compose them.
What I suspect is causing the problem:
Backend:
index.js is run and looks like this:
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const express_1 = __importDefault(require("express"));
const Routes_1 = __importDefault(require("./Routes"));
const app = (0, express_1.default)();
app.use(function (req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "*");
res.header("Access-Control-Allow-Methods", "PUT,POST,GET,DELETE,OPTIONS");
next();
});
// middleswares
app.use(express_1.default.json());
app.use(express_1.default.urlencoded({ extended: false })); //changed to see wheater it would effect the package isssue- should allow
app.use(Routes_1.default);
app.listen(4000,'0.0.0.0'); // or fqdn 'frontend'
console.log('server on port', 4000);
This is generated from index.ts and a build command.
The corresponding Dockerfile:
FROM node:alpine as builder
WORKDIR /app/
COPY . /app/
COPY package.json /app/
COPY package-lock.json /app/
RUN cd /app/
RUN npm install -g
RUN npm update express
RUN npm install pg
FROM nginx:alpine
COPY --from=builder ./app/dist ./usr/share/nginx/html/
EXPOSE 3999-6001
CMD ["nginx", "-g", "daemon off;"]
RUN apk add --update nodejs
RUN apk add --update npm
After the image runs, I open a terminal and run the following in the /usr/share/nginx/html directory:
npm i express
npm i pg
node index.js
Then I install vim
and edit the nginx config like this:
vi /etc/nginx/nginx.conf
I add a server block, make it listen on the FQDN 'frontend' (or its referring IP) and port 4000,
with the listen ip:port kind of syntax.
I added error and access logs earlier on, and they don't report problems, besides sometimes saying that IPs are not available; I'm lacking the understanding of how to interpret that.
The PostgreSQL instance is also running in a Docker container on the default port 5432, and the FQDN database is also properly resolvable,
same as the backend's FQDN.
There is so much more stuff linking the short pieces of code I have; feel free to request more if interested, or if you think it'd be required to find out what's going wrong.
I learnt my lesson:
servers listen on their own IPs, or on their localhost.
So I had a misconception there. Thanks, though, to the people taking a look in here.
Also, a Node.js/Express server doesn't necessarily need nginx to run on; node is enough for this purpose.
Fixing these two things led to functionality as designed :) (a minimal sketch follows)
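A hedged sketch of the resulting backend, assuming the port is published to the host in docker-compose (e.g. "4000:4000") so the browser calls the API through the host-mapped port rather than the Docker-internal hostname backend:

// index.js (backend container): serve the Express API directly, no nginx in front
const express = require('express');
const app = express();

// permissive CORS headers, as in the original compiled index.js
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Headers', '*');
  res.header('Access-Control-Allow-Methods', 'PUT,POST,GET,DELETE,OPTIONS');
  next();
});

app.get('/crafts', (req, res) => {
  res.json([]); // placeholder; the real data comes from ./Routes and PostgreSQL
});

// Bind inside the container; the browser reaches it via the published host port,
// e.g. http://localhost:4000/crafts, not http://backend:4000/crafts.
app.listen(4000, '0.0.0.0', () => console.log('server on port', 4000));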
So this can be closed, or used as a reminder of these two things:
understanding the conceptual idea of how networks work
AND
understanding the tech stack being used and how it works.
else
/closed

How to call an Express.js endpoint from a static HTML file in Vercel

I have a very simple app with one /api/index.js server and one index.html file at the root.
index.js has a route app.get("/api/mystuff", () => {...}).
index.html pings this route from a <script> with:
const result = await fetch("/api/mystuff")
All of this works locally, but when deployed to Vercel I get hit with a 404 from my request. The endpoint it's hitting is https://myvercelapp.vercel.app/api/mystuff and I'm getting a "The page could not be found" NOT_FOUND error. I don't know how to get this working; can someone steer me in the right direction?
Thanks!
Based on the tags, I understand you are using Express with Node.js.
Without seeing any of your code, I am guessing either your vercel.json isn't set up correctly (the guide explains this) or your index.js isn't set up correctly.
Vercel guide on using Express
Vercel has a great guide on Using Express.js with Vercel.
In Vercel's guide they have a section addressing using Standalone Express.
In that guide they use an index.js inside of an api folder.
The index.js file:
const app = require('express')();
const { v4 } = require('uuid');

app.get('/api', (req, res) => {
  const path = `/api/item/${v4()}`;
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('Cache-Control', 's-max-age=1, stale-while-revalidate');
  res.end(`Hello! Go to item: ${path}`);
});

app.get('/api/item/:slug', (req, res) => {
  const { slug } = req.params;
  res.end(`Item: ${slug}`);
});

module.exports = app;
and the vercel.json, which is what makes the whole project work:
{
  "rewrites": [{ "source": "/api/(.*)", "destination": "/api" }]
}
And finally, see Adding a Public Directory, which may explain more on how you can properly use Vercel with Express.
The Problem
The reason I assume the vercel.json is the problem is that it works locally but not on Vercel. A good way to test this locally is using vercel dev.
Working Example
I've created a public example which may help you. Please check https://github.com/Crispy-Cream/vercel-with-express for the source code example
and the public website https://vercel-with-express.vercel.app/api
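Applied to the setup in the question, a hedged sketch of what api/index.js could look like (the /api/mystuff route name comes from the question; the response body here is just a placeholder):

// api/index.js
const app = require('express')();

app.get('/api/mystuff', (req, res) => {
  res.json({ message: 'hello from /api/mystuff' }); // placeholder payload
});

module.exports = app;

With the vercel.json rewrite shown above, requests to /api/mystuff reach this function, while the static index.html at the project root is still served as-is.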

Vercel giving 404 error when using a custom URL

I am developing a code playground. Everything works on my computer.
It has a saving system: it encodes your code and puts it in the URL, and when the page loads it gets the code back from the URL. It works perfectly fine locally. It uses Vite and vanilla JS, and I used the Vite preset on Vercel, but there the saving system doesn't work. When you reload, instead of getting the code, it gives a 404 error message, as the URL isn't in the dist folder.
What can I do?
Complete code: https://github.com/L1ghtingBolt/codeebox/
Add a Vercel configuration file to the root of your project:
vercel.json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/" }
  ]
}
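With that rewrite, every path serves the root index.html, so the client-side script can read the encoded snippet back out of the URL on load. A minimal sketch, assuming the code is base64-encoded into the path (the decoding and the #editor element are hypothetical, not taken from the linked repo):

// Runs on page load: restore code that was encoded into the URL path.
window.addEventListener('DOMContentLoaded', () => {
  const encoded = window.location.pathname.slice(1); // e.g. "/<base64 code>"
  if (encoded) {
    const code = atob(decodeURIComponent(encoded)); // decode back to source text
    document.querySelector('#editor').value = code; // hypothetical editor element
  }
});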

Firebase deploy error: Error: Task 36af5812590007491b4e76c59df3b5a9afb37586b8a3eab39bb16d14cacf1c64 failed: retries exhausted after 6 attempts [duplicate]

I am trying to set up two brand-new Firebase projects with Firestore, Functions, Storage, and Hosting for both production and development environments. I started by deleting the references to the old Firebase project: both firebase.json and .firebaserc. I then ran $ firebase init to set up Hosting, Functions, Storage, and Firestore using the test Firebase project. I then set up Firebase aliases with $ firebase use --add to switch between the two projects within one React.js project. I ran npm run build and am attempting to deploy with $ firebase deploy --project:test, but the hosting keeps retrying the same last file and fails with: Error: Task 5fcd... failed: retries exhausted after 6 attempts.
I've seen some Stack Overflow answers that relate to the servers being down temporarily, but I do not see any server problems on their side (https://status.firebase.google.com/) and this has persisted before. On another project I worked on, I was trying to deploy to 2 hosting targets on the same Firebase project; one was failing and the other was working fine, and I was getting this same error (I never found a solution other than not using multiple targets).
What else can I test to get this working? Is it something inside my React.js code base? (I recently deployed to my past project.) Maybe it has to do with my Firebase setup process? Or is there still a connection to the old Firebase project? I don't know what to look at next to fix this. Any direction would be great, thanks!
P.S.: Something weird that might not be connected: if I run just $ firebase deploy, it doesn't deploy the default test env defined in .firebaserc, but the live env?
.firebaserc:
{
  "projects": {
    "default": "test-app-tech",
    "test": "test-app-tech",
    "live": "live-app-tech"
  }
}
firebase.json:
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  },
  "functions": {
    "predeploy": "npm --prefix \"$RESOURCE_DIR\" run build"
  },
  "hosting": {
    "public": "build",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  },
  "storage": {
    "rules": "storage.rules"
  }
}
Full console log (I'm not sure where to find a longer log file than the console output):
=== Deploying to 'test-app-tech'...
i deploying storage, firestore, functions, hosting
Running command: npm --prefix "$RESOURCE_DIR" run build
> functions# build C:\Users\me\Documents\GitHub\react.app.tech\functions
> tsc
+ functions: Finished running predeploy script.
i firebase.storage: checking storage.rules for compilation errors...
+ firebase.storage: rules file storage.rules compiled successfully
i firestore: reading indexes from firestore.indexes.json...
i cloud.firestore: checking firestore.rules for compilation errors...
+ cloud.firestore: rules file firestore.rules compiled successfully
i functions: ensuring necessary APIs are enabled...
+ functions: all necessary APIs are enabled
i storage: latest version of storage.rules already up to date, skipping upload...
i firestore: uploading rules firestore.rules...
+ firestore: deployed indexes in firestore.indexes.json successfully
i functions: preparing functions directory for uploading...
i functions: packaged functions (80.39 KB) for uploading
+ functions: functions folder uploaded successfully
i hosting[test-app-tech]: beginning deploy...
i hosting[test-app-tech]: found 32 files in build
⠸ hosting: uploading new files [0/1] (0%)
Error: Task 5fcd5c559ded0c02b3ed7840ca3ee77e95b798730af98d9f18bc627ac898071e failed: retries exhausted after 6 attempts
Remove the content in the .firebase folder and try to redeploy.
There are two reasons this can happen.
First, deleting the .firebase folder in your root folder may solve the problem.
If this doesn't work:
your internet connection may be slow and the files in your project may be large, so try with a faster internet connection.
With a slow connection, if you keep trying to deploy again and again, you will see the number of files uploading decrease in the console; that means your files are getting uploaded, but it is taking too much time and Firebase gives up (retries exhausted).
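A minimal sketch of that clean redeploy, assuming the cache lives in the default .firebase folder at the project root:

rm -rf .firebase
firebase deploy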
I explored a bit deeper by appending the --debug flag, which gave me: TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received type undefined. I explored this and tried to fix it, but it didn't work. I deleted /node_modules/, package-lock.json, and /build/, reinstalled packages, deployed, and it worked. I'm not sure what fixed it, because I had deleted those files before to no avail. I made a few other small, seemingly unrelated changes, which who knows might have been connected, but it's working now!
UPDATE: I had to truly find the error, since my production environment was having the same issue, and narrowed it down to the steps I took exploring the TypeError [ERR_INVALID_ARG_TYPE]. Following this post somewhat, I changed:
"hosting": {
"public": "build",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
],
"rewrites": [
{
"source": "**",
"destination": "/index.html"
}
]
},
to
"hosting": {
"public": "public",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
],
"rewrites": [
{
"source": "**",
"destination": "/index.html"
}
]
},
then deployed and it went through, but nothing is shown on the live site, of course, because it isn't looking at the build folder. So I changed it back to point to build instead of public, deployed again, and it worked. Weird solution; I'm sending it to the Firebase team to see what really happened here.
In Angular:
What Douglas answered was the solution: in angular.json you change the hosting "dist" to "public".
Then you run firebase deploy again and it will give you an error, but don't worry; change back what you modified in angular.json, then run firebase deploy again and voilà! That worked for me.
The problem for me was the connection dropping every few minutes. I was able to upload my files by repeating the upload process. It was not tenable to sit at the computer and retry by hand, so in my root folder I added a bash script that retries on error in a loop.
Create a file deploy_staging.sh or deploy_production.sh etc.:
#!/bin/bash
trap "exit" INT
firebase use my_project_name
until firebase deploy --only hosting:my_project; do
  echo Transfer disrupted, retrying in 3 seconds...
  sleep 3
done
(The trap "exit" INT line allows the loop to be interrupted with Ctrl-C if needed.)
In the directory of the file, on the terminal command line, run chmod +x my_file_name.sh to make the file executable.
Run the file with ./my_file_name.sh in the terminal. It will rerun firebase deploy until the files are uploaded.
Delete the Firebase cache (the .firebase folder) and run firebase deploy again.

Redirect all routes to https in a Nuxt project hosted on Heroku

I'm trying to create a middleware to redirect all my routes to https. I think I need a middleware, so I've created a redirect.js file in Nuxt's middleware folder and then added this to nuxt.config.js:
router: {
  middleware: ["redirect"]
},
And here is my redirect.js file that gives me a server error:
export default function({ app }) {
  if (process.env.NODE_ENV === "production") {
    if (app.context.req.header("x-forwarded-proto") !== "https") {
      app.context.res.redirect(`https://${app.context.req.header("host")}${app.context.req.url}`);
    }
  }
}
I found an easier way: I added the redirect-ssl package,
npm i redirect-ssl
and then added this line to my nuxt.config.js:
serverMiddleware: ["redirect-ssl"],
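For comparison, a hedged sketch of doing the same redirect by hand as a Nuxt server middleware (instead of the router middleware from the question), assuming Heroku sets the x-forwarded-proto header as usual:

// server-middleware/redirect-https.js (hypothetical file name)
export default function (req, res, next) {
  // Heroku terminates TLS and reports the original scheme in this header.
  if (process.env.NODE_ENV === 'production' &&
      req.headers['x-forwarded-proto'] !== 'https') {
    res.writeHead(301, { Location: `https://${req.headers.host}${req.url}` });
    return res.end();
  }
  next();
}

It would be registered the same way, e.g. serverMiddleware: ["~/server-middleware/redirect-https"] in nuxt.config.js.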
You can configure this in Heroku.
For Nuxt static mode:
Add the https://github.com/heroku/heroku-buildpack-static.git buildpack after the heroku/nodejs buildpack under buildpacks.
Make sure the buildpacks are added in this order to an app.json in the root directory of your project (see the sketch after this list).
Add a static.json in the root directory of your project.
Make sure root and https_only are set correctly.
If you want Nuxt to resolve routes instead of Nginx (so that you don't get the Nginx 404 page when you go to an unknown route), set routes as well.
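A hedged sketch of such an app.json, assuming no other Heroku settings are needed:

{
  "buildpacks": [
    { "url": "heroku/nodejs" },
    { "url": "https://github.com/heroku/heroku-buildpack-static.git" }
  ]
}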
Example static.json:
{
  "root": "dist/",
  "routes": {
    "/**": "index.html"
  },
  "https_only": true
}
