I'm working in an Nx NestJS + Angular environment with Docker containers for NestJS and Postgres. The CRUD operations work fine from localhost to the database container, but I get an error if I send the request from the NestJS container to the database container. My docker-compose configuration in the root dir:
version: '3.8'
services:
  nest-api:
    container_name: nest-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
volumes:
  postgres:
networks:
  nestjs-crud:
Meanwhile, the database URL in the .env file:
DATABASE_URL="postgresql://admin:admin#postgres:5432/mydb?schema=public"
The Dockerfile:
FROM node:14
WORKDIR /workspace
COPY . .
COPY /prisma ./prisma/
RUN npm install
EXPOSE 3333
EXPOSE 9229
CMD [ "npm", "run", "start:migrate:dev" ]
and I've configured my package.json like this: "start:migrate:dev": "prisma migrate deploy && nx serve"
I still can't figure out what I'm missing and where I'm making the mistake. Any help is appreciated.
You have to keep both containers on the same network so that they can communicate. Update your docker-compose.yml file as below.
version: '3.8'
services:
  nest-api:
    container_name: nest-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    networks:
      - nestjs-crud
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - nestjs-crud
volumes:
  postgres:
networks:
  nestjs-crud:
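A quick way to confirm the two containers can actually reach each other once they share the nestjs-crud network is to resolve the service name from inside the nest-api container; a small check along these lines (assuming the stack is already up, and using docker-compose exec on older Compose installs):
docker compose exec nest-api node -e "require('dns').lookup('postgres', (err, addr) => console.log(err || addr))"
If that prints the Postgres container's IP, the postgres host in DATABASE_URL is resolvable. Note also that the compose file does not set POSTGRES_DB, so a database named mydb is not created automatically by the postgres image; that is a separate thing to verify, not part of the networking fix.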
When I work without Docker (just running React and Django in two separate terminals) everything works fine, but when I use docker-compose, the proxy does not work and I get this error:
Proxy error: Could not proxy request /doc from localhost:3000 to http://127.0.0.1:8000.
See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNREFUSED).
The work is complicated by the fact that each time package.json changes, you need to delete node_modules and package-lock.json and then reinstall with npm install (because of the cache, proxy changes in package.json are not applied to the container). I have already tried specifying these proxy options:
"proxy": "http://localhost:8000/",
"proxy": "http://localhost:8000",
"proxy": "http://127.0.0.1:8000/",
"proxy": "http://0.0.0.0:8000/",
"proxy": "http://<my_ip>:8000/",
"proxy": "http://backend:8000/", - django image name
Nothing helps; the proxy only works when running without a container, so I conclude that the problem is in the Docker settings.
I have seen solutions that use an nginx image, but that does not work for me: at the development stage I don't need nginx and the million additional problems that come with it, so there must be a way to solve this without nginx.
docker-compose.yml:
version: "3.8"
services:
backend:
build: ./monkey_site
container_name: backend
command: python manage.py runserver 127.0.0.1:8000
volumes:
- ./monkey_site:/usr/src/monkey_site
ports:
- "8000:8000"
environment:
- DEBUG=1
- DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1
- CELERY_BROKER=redis://redis:6379/0
- CELERY_BACKEND=redis://redis:6379/0
depends_on:
- redis
networks:
- proj_network
frontend:
build: ./frontend
container_name: frontend
ports:
- "3000:3000"
command: npm start
volumes:
- ./frontend:/usr/src/frontend
- ./monkey_site/static:/usr/src/frontend/src/static
depends_on:
- backend
networks:
- proj_network
celery:
build: ./monkey_site
command: celery -A monkey_site worker --loglevel=INFO
volumes:
- ./monkey_site:/usr/src/monkey_site/
depends_on:
- backend
- redis
redis:
image: "redis:alpine"
networks:
proj_network:
React Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/frontend
COPY package.json .
RUN npm install
EXPOSE 3000
Django Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/monkey_site
COPY requirements.txt ./
RUN pip install -r requirements.txt
package.json:
{
  "name": "frontend",
  "version": "0.1.0",
  "private": true,
  "proxy": "http://127.0.0.1:8000",
  "dependencies": {
    ...
In Django I have django-cors-headers and all settings like:
CORS_ALLOWED_ORIGINS = [
    'http://localhost:3000',
    'http://127.0.0.1:3000',
]
Does anyone have any ideas how to solve this problem?
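For what it's worth, a sketch of the combination that usually works for this kind of setup; both changes are assumptions about your project rather than a confirmed fix. The Django dev server inside the backend container needs to listen on all interfaces (127.0.0.1 inside that container is not reachable from the frontend container), and the proxy that runs inside the frontend container needs to target the Compose service name:
docker-compose.yml (backend service):
command: python manage.py runserver 0.0.0.0:8000
frontend/package.json:
"proxy": "http://backend:8000",
Since ./frontend is bind-mounted into the container, restarting the frontend container should be enough to pick up the package.json change; deleting node_modules on the host should not be necessary for the proxy setting.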
I have an issue with creating a record for the Room model.
It works when I insert a new element into the user table, but with Room it doesn't work.
Here is some output for more info:
root#70cfe0072344:/usr/src/api# yarn prisma migrate dev
yarn run v1.22.19
$ /usr/src/api/node_modules/.bin/prisma migrate dev
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "trandandan", schema "public" at "postgres:5432"
Already in sync, no schema change or pending migration was found.
✔ Generated Prisma Client (4.7.1 | library) to ./node_modules/@prisma/client in 99ms
Done in 3.36s.
root#70cfe0072344:/usr/src/api# yarn prisma generate
yarn run v1.22.19
$ /usr/src/api/node_modules/.bin/prisma generate
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
✔ Generated Prisma Client (4.7.1 | library) to ./node_modules/@prisma/client in 90ms
You can now start using Prisma Client in your code. Reference: https://pris.ly/d/client
```
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
```
Done in 1.64s.
root#70cfe0072344:/usr/src/api#
I tried running yarn prisma migrate dev and yarn prisma generate both in the container and on the host, but it didn't work.
Here is my docker-compose file for reference :
version: '3.7'
services:
  api:
    container_name: api
    build:
      context: ./api
      target: development
    volumes:
      - ./api:/usr/src/api
      - /usr/src/api/node_modules
    ports:
      - ${SERVER_PORT}:${SERVER_PORT}
      - 5555:5555 # for prisma studio
      - 9229:9229 # for debugging
    command: yarn start:dev
    networks:
      - webnet
    depends_on:
      - postgres
  postgres:
    container_name: postgres
    image: postgres:14
    networks:
      - webnet
    environment:
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      PG_DATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
networks:
  webnet:
volumes:
  pgdata:
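Since migrate dev reports "Already in sync", the schema.prisma the CLI sees apparently contains no pending change for Room; if the model had been added after the last migration, a new migration would normally be generated. Purely as an illustration (the model and its fields are hypothetical, not taken from your project), adding the model and creating a migration for it from inside the api container would look like:
// prisma/schema.prisma (hypothetical model)
model Room {
  id        Int      @id @default(autoincrement())
  name      String
  createdAt DateTime @default(now())
}
docker-compose exec api yarn prisma migrate dev --name add-room
Also note the anonymous /usr/src/api/node_modules volume in the compose file: a Prisma Client generated on the host lands in the host's node_modules and never reaches that volume, so the generate step only affects the running app when it is executed inside the container, possibly followed by a restart of the dev process so the regenerated client is actually loaded.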
So I have working backend and db images in my setup, and I'm now trying to do the same with the frontend, but I'm having a much harder time getting to the point where I can view my app. My impression was that I had to copy over the dist folder created after running vite build. I created an image from my frontend Dockerfile and updated my docker-compose file to include the frontend service, but when I navigate to 3300 I get a 404. My server runs on 3300, and when I normally run Vite it starts a dev server on 3000. I'm also new to Vite, which has made this a little more confusing. I've tried messing with which ports are exposed but have had no luck. I had a much easier time containerizing the backend and my db. Thanks so much for any help!
Dockerfile:
FROM node:16-alpine
RUN mkdir -p /user/src/app
WORKDIR /user/src/app
COPY ["package.json", "package-lock.json", "./"] /user/src/app/
RUN npm ci
COPY . .
EXPOSE 3300
CMD [ "npm", "run", "server:run" ]
Dockerfile-frontend:
FROM node:16-alpine
WORKDIR /user/src/app
COPY . .
RUN npm ci
RUN npm run app:build
COPY dist /user/src/app
EXPOSE 3300
CMD ["npm", "run", "app:dev"]
Docker-compose:
version: '3.9'
services:
  #mongo db service
  mongodb:
    container_name: db_container
    image: mongo:latest
    env_file:
      - ./.env
    restart: always
    ports:
      - $MONGODB_DOCKER_PORT:$MONGODB_DOCKER_PORT
    volumes:
      - ./mongodb:/data/db
  #node app service
  app:
    container_name: node_container
    image: sherlogs_app
    build: .
    env_file:
      - ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_LOCAL_PORT
    volumes:
      - .:/user/src/app
    stdin_open: true
    tty: true
    depends_on:
      - mongodb
  # frontend container
  frontend:
    container_name: frontend_container
    image: sherlogs/frontend-container
    build: .
    env_file:
      - ./.env
    ports:
      - $FRONTEND_LOCAL_PORT:$FRONTEND_LOCAL_PORT
    volumes:
      - .:/user/src/app
    depends_on:
      - mongodb
      - app
volumes:
  mongodb: {}
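In case it helps, a sketch of the dev-server route for the frontend; the 3000 port comes from your description of how Vite normally runs, and the --host flag and the assumption that app:dev runs vite are guesses about your project (Vite binds to localhost by default, which is unreachable from outside the container):
# Dockerfile-frontend (sketch)
FROM node:16-alpine
WORKDIR /user/src/app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "run", "app:dev", "--", "--host", "0.0.0.0"]
# docker-compose.yml, frontend service (sketch)
ports:
  - 3000:3000
With this the dev server would be reached on localhost:3000 while the API stays on 3300; copying dist into the image is only needed when serving a production build, not for the dev server.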
I want to use an ID and a token as environment variables for security, so I put the code below in docker-compose.yml:
frontend:
  build:
    context: frontend
    dockerfile: Dockerfile
  stdin_open: true
  volumes:
    - './frontend:/app:cached'
    - './frontend/node_modules:/app/node_modules:cached'
  environment:
    - AUTH_TOKEN=token
    - ACCOUNT_SID=account
    - NODE_ENV=development
I set AUTH_TOKEN and ACCOUNT_SID in docker-compose.yml, but when I console.log(process.env.AUTH_TOKEN), it is undefined in my React app.
How can I set these environment variables properly?
Thank you.
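If the app was bootstrapped with Create React App, only variables prefixed with REACT_APP_ are exposed to the client code, and they are read when the dev server or build starts. A sketch of what that would look like (the names are simply your variables with the prefix added):
environment:
  - REACT_APP_AUTH_TOKEN=token
  - REACT_APP_ACCOUNT_SID=account
  - NODE_ENV=development
console.log(process.env.REACT_APP_AUTH_TOKEN);
Keep in mind that anything exposed this way ends up in the JavaScript bundle shipped to the browser, so it is not a safe place for a real secret token.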
I use Docker to containerize my Adonis app. The build was successful, but when I access the app I get ERR_SOCKET_NOT_CONNECTED or ERR_CONNECTION_RESET.
My docker-compose file contains Adonis and a database. Previously, I used a setup similar to this for my Express.js app, and it had no problem.
The Adonis .env remains standard, with no modification.
This is my setup:
# docker-compose.yml
version: '3'
services:
  adonis:
    build: ./adonis
    volumes:
      - ./adonis/app:/usr/src/app
    networks:
      - backend
    links:
      - database
    ports:
      - "3333:3333"
  database:
    image: mysql:5.7
    ports:
      - 33060:3306
    networks:
      - backend
    environment:
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "root"
      MYSQL_ROOT_PASSWORD: "root"
networks:
  backend:
    driver: bridge
# adonis/Dockerfile
FROM node:12-alpine
RUN npm i -g @adonisjs/cli
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./app/. .
RUN npm install
EXPOSE 3333
CMD ["adonis", "serve", "--dev"]
I couldn't spot anything wrong with my setup.
The serve command starts the HTTP server on the port defined inside the .env file in the project root.
You should have something like this (note that HOST has to be set to 0.0.0.0 instead of localhost to accept connections from outside the container):
HOST=0.0.0.0
PORT=3333
APP_URL=http://${HOST}:${PORT}
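Once HOST is updated and the adonis container has been rebuilt and restarted, a quick check from the host machine (assuming the 3333:3333 port mapping from the compose file above):
curl http://localhost:3333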