I want to use an ID and a token as environment settings for security, so I put the code below in my docker-compose.yml:
frontend:
build:
context: frontend
dockerfile: Dockerfile
stdin_open: true
volumes:
- './frontend:/app:cached'
- './frontend/node_modules:/app/node_modules:cached'
environment:
- AUTH_TOKEN=token
- ACCOUNT_SID=account
- NODE_ENV=development
I set AUTH_TOKEN and ACCOUNT_SID in docker-compose.yml, but when I console.log(process.env.AUTH_TOKEN) it is 'undefined' in my React app.
How can I properly set these environment variables?
Thank you.
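(For reference: if this is a Create React App project, only variables prefixed with REACT_APP_ are exposed to the client bundle, and they are read when the dev server or build starts, not at request time. A minimal sketch of the environment block under that assumption:
environment:
  - REACT_APP_AUTH_TOKEN=token    # CRA only forwards variables that start with REACT_APP_
  - REACT_APP_ACCOUNT_SID=account
  - NODE_ENV=development
The app would then read process.env.REACT_APP_AUTH_TOKEN. Keep in mind that anything bundled into browser code is visible to users, so a real auth token is not kept secret this way.)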
When I work without Docker (just running React and Django in two separate terminals) everything works fine, but when I use docker-compose the proxy does not work and I get this error:
Proxy error: Could not proxy request /doc from localhost:3000 to http://127.0.0.1:8000.
See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNREFUSED).
The work is complicated by the fact that after every change to package.json you need to delete node_modules and package-lock.json and then reinstall with npm install (because of the cache, proxy changes in package.json are not applied to the container). I have already tried specifying these proxy options:
"proxy": "http://localhost:8000/",
"proxy": "http://localhost:8000",
"proxy": "http://127.0.0.1:8000/",
"proxy": "http://0.0.0.0:8000/",
"proxy": "http://<my_ip>:8000/",
"proxy": "http://backend:8000/", - django image name
Nothing helps; the proxy only works when running without a container, so I conclude that the problem is in the Docker settings.
I saw a solution using an nginx image, but it doesn't work for me. At the development stage I don't need nginx and the million additional problems that come with it; there must be a way to solve this without nginx.
docker-compose.yml:
version: "3.8"
services:
backend:
build: ./monkey_site
container_name: backend
command: python manage.py runserver 127.0.0.1:8000
volumes:
- ./monkey_site:/usr/src/monkey_site
ports:
- "8000:8000"
environment:
- DEBUG=1
- DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1
- CELERY_BROKER=redis://redis:6379/0
- CELERY_BACKEND=redis://redis:6379/0
depends_on:
- redis
networks:
- proj_network
frontend:
build: ./frontend
container_name: frontend
ports:
- "3000:3000"
command: npm start
volumes:
- ./frontend:/usr/src/frontend
- ./monkey_site/static:/usr/src/frontend/src/static
depends_on:
- backend
networks:
- proj_network
celery:
build: ./monkey_site
command: celery -A monkey_site worker --loglevel=INFO
volumes:
- ./monkey_site:/usr/src/monkey_site/
depends_on:
- backend
- redis
redis:
image: "redis:alpine"
networks:
proj_network:
React Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/frontend
COPY package.json .
RUN npm install
EXPOSE 3000
Django Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/monkey_site
COPY requirements.txt ./
RUN pip install -r requirements.txt
package.json:
{
"name": "frontend",
"version": "0.1.0",
"private": true,
"proxy": "http://127.0.0.1:8000",
"dependencies": {
...
In Django I have django-cors-headers installed, with settings like:
CORS_ALLOWED_ORIGINS = [
'http://localhost:3000',
'http://127.0.0.1:3000',
]
Does anyone have any ideas on how to solve this problem?
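(One thing stands out in this setup: runserver is bound to 127.0.0.1 inside the backend container, so nothing outside that container, including the frontend container, can reach it. A sketch of the usual fix, keeping the rest of the file as-is:
backend:
  command: python manage.py runserver 0.0.0.0:8000  # bind to all interfaces, not just the container's loopback
With that change, "proxy": "http://backend:8000" should work from the frontend container, since Compose resolves service names on proj_network; localhost and 127.0.0.1 inside the frontend container point at the frontend container itself.)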
I'm working in an NX Nest+Angular environment with Docker containers for Nest and Postgres. The CRUD operations work fine from localhost to the database container, but I get the error if I send the request from the Nest container to the database container. My docker-compose file configuration in the root dir:
version: '3.8'
services:
nest-api:
container_name: nest-api
build:
context: .
dockerfile: Dockerfile
ports:
- 3333:3333
postgres:
image: postgres:13.5
container_name: postgres
restart: always
environment:
- POSTGRES_USER=admin
- POSTGRES_PASSWORD=admin
volumes:
- postgres:/var/lib/postgresql/data
ports:
- '5432:5432'
volumes:
postgres:
networks:
nestjs-crud:
Meanwhile, the database URL in the .env file:
DATABASE_URL="postgresql://admin:admin@postgres:5432/mydb?schema=public"
The Dockerfile:
FROM node:14
WORKDIR /workspace
COPY . .
COPY /prisma ./prisma/
RUN npm install
EXPOSE 3333
EXPOSE 9229
CMD [ "npm", "run", "start:migrate:dev" ]
and I've configured my package.json like this: "start:migrate:dev": "prisma migrate deploy && nx serve"
I still can't figure out what I'm missing and where I'm making the mistake. Any help is appreciated.
You have to keep both containers on the same network to allow them to communicate. Update your docker-compose.yml file as below.
version: '3.8'
services:
nest-api:
container_name: nest-api
build:
context: .
dockerfile: Dockerfile
ports:
- 3333:3333
networks:
- nestjs-crud
postgres:
image: postgres:13.5
container_name: postgres
restart: always
environment:
- POSTGRES_USER=admin
- POSTGRES_PASSWORD=admin
volumes:
- postgres:/var/lib/postgresql/data
ports:
- '5432:5432'
networks:
- nestjs-crud
volumes:
postgres:
networks:
nestjs-crud:
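(After updating the file, docker compose up -d --force-recreate recreates both containers so they join nestjs-crud; the postgres hostname in DATABASE_URL then resolves through Compose's service-name DNS.)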
So I have working backend and db images in my compose setup and I'm now trying to do the same with the frontend, but I'm having a much harder time getting to where I can view my app. My impression is that I had to copy over the dist folder created after running vite build. I created an image from my frontend Dockerfile and updated my docker-compose file to include the frontend service, but when I navigate to 3300 I get a 404. My server is running on 3300, and when I usually run Vite it starts a dev server on 3000. I'm also new to using Vite, which has made this a little more confusing. I've tried messing with the ports and which are exposed, but have had no luck. I had a much easier time containerizing the backend and my db. Thanks so much for any help!
Dockerfile:
FROM node:16-alpine
RUN mkdir -p /user/src/app
WORKDIR /user/src/app
COPY ["package.json", "package-lock.json", "./"] /user/src/app/
RUN npm ci
COPY . .
EXPOSE 3300
CMD [ "npm", "run", "server:run" ]
Dockerfile-frontend:
FROM node:16-alpine
WORKDIR /user/src/app
COPY . .
RUN npm ci
RUN npm run app:build
COPY dist /user/src/app
EXPOSE 3300
CMD ["npm", "run", "app:dev"]
Docker-compose:
version: '3.9'
services:
#mongo db service
mongodb:
container_name: db_container
image: mongo:latest
env_file:
- ./.env
restart: always
ports:
- $MONGODB_DOCKER_PORT:$MONGODB_DOCKER_PORT
volumes:
- ./mongodb:/data/db
#node app service
app:
container_name: node_container
image: sherlogs_app
build: .
env_file:
- ./.env
ports:
- $NODE_LOCAL_PORT:$NODE_LOCAL_PORT
volumes:
- .:/user/src/app
stdin_open: true
tty: true
depends_on:
- mongodb
# frontend container
frontend:
container_name: frontend_container
image: sherlogs/frontend-container
build: .
env_file:
- ./.env
ports:
- $FRONTEND_LOCAL_PORT:$FRONTEND_LOCAL_PORT
volumes:
- .:/user/src/app
depends_on:
- mongodb
- app
volumes:
mongodb: {}
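(One visible mismatch in this compose file: the frontend service uses build: ., which builds the backend Dockerfile rather than Dockerfile-frontend, and the Vite dev server described above listens on 3000, not 3300. A sketch of the frontend service under those assumptions:
frontend:
  container_name: frontend_container
  build:
    context: .
    dockerfile: Dockerfile-frontend  # otherwise the default Dockerfile (the backend one) is built
  ports:
    - '3000:3000'  # the dev-server port; Vite must also listen on 0.0.0.0 (vite --host) to be reachable from the host
Serving the built dist folder is the other route, but then some process has to actually serve those static files instead of running app:dev.)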
I use Docker to contain my Adonis app. The build was successful, but when I access the app I get ERR_SOCKET_NOT_CONNECTED or ERR_CONNECTION_RESET.
My docker-compose file contains adonis and database services. Previously I used a setup similar to this for my Express.js app, and it had no problem.
The adonis .env remains standard, without modification.
This is my setup:
# docker-compose.yml
version: '3'
services:
adonis:
build: ./adonis
volumes:
- ./adonis/app:/usr/src/app
networks:
- backend
links:
- database
ports:
- "3333:3333"
database:
image: mysql:5.7
ports:
- 33060:3306
networks:
- backend
environment:
MYSQL_USER: "user"
MYSQL_PASSWORD: "root"
MYSQL_ROOT_PASSWORD: "root"
networks:
backend:
driver: bridge
# adonis/Dockerfile
FROM node:12-alpine
RUN npm i -g @adonisjs/cli
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./app/. .
RUN npm install
EXPOSE 3333
CMD ["adonis", "serve", "--dev"]
I couldn't spot anything wrong with my setup.
The serve command starts the HTTP server on the port defined inside the .env file in the project root.
You should have something like this (note that HOST has to be set to 0.0.0.0 instead of localhost to accept connections from the outside):
HOST=0.0.0.0
PORT=3333
APP_URL=http://${HOST}:${PORT}
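(Equivalently, those values can be injected from docker-compose.yml instead of the .env file, assuming the app falls back to process.env as dotenv-based setups typically do:
adonis:
  environment:
    - HOST=0.0.0.0  # accept connections from outside the container
    - PORT=3333
No rebuild is needed for environment changes; re-running docker-compose up recreates the container with the new values.)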
I set up a docker-compose file that connects my app to a MongoDB database. My problem is that the database never seems to be initialized at first: my script is not executed, and even though I can send some requests to the container, I only get connection-refused errors due to authentication.
I followed this thread exactly and I don't know what I'm missing! (the db folder is on the same level as my docker-compose.yml)
Looking for some help on this one, thanks!
edit: None of the logs I put in the init script show up in the console; that's how I came to the conclusion that the file is not executed at all.
Here is my docker-compose file:
services:
mongo:
image: mongo:latest
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: admin
MONGO_INITDB_DATABASE: test
volumes:
- ./db:/docker-entrypoint-initdb.d
- ./db-data:/data/db
ports:
- 27017:27017
networks:
- api1
app:
restart: always
build:
context: .
environment:
DB_HOST: localhost
DB_PORT: 27017
DB_NAME: test
DB_USER: developer
DB_PASS: developer
PORT: 3000
ports:
- 3000:3000
networks:
- api1
depends_on:
- mongo
command: npm start
networks:
api1:
driver: bridge
Here is my init script:
/* eslint-disable no-undef */
try {
print("CREATING USER")
db.createUser(
{
user: "developer",
pwd: "developer",
roles: [{ role: "readWrite", db: "test" }]
}
);
} catch (error) {
print(`Failed to create developer db user:\n${error}`);
}
And my dockerfile:
FROM node:10 as builder
RUN mkdir /home/node/app
WORKDIR /home/node/app
# Install dependencies
COPY package.json yarn.lock ./
RUN yarn install && yarn cache clean
# Copy source scripts
COPY . .
RUN yarn build
FROM node:10-alpine
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
COPY --from=builder --chown=node:node /home/node/app .
USER node
EXPOSE 3000
CMD ["node", "./build/bundle.js"]