MongoDB init-script not launching with docker-compose - javascript

I set up a docker-compose file that connects my app to a MongoDB database. My problem is that the database never seems to be initialized: my init script is not executed, and even though I can send requests to the container, I only get connection refused errors due to authentication.
I followed this thread exactly and I don't know what I'm missing! (The db folder is on the same level as my docker-compose.yml.)
Looking for some help on this one, thanks!
Edit: none of the log statements I put in the init script show up in the console, which is how I came to the conclusion that the file is not executed at all.
Here is my docker-compose file:
services:
  mongo:
    image: mongo:latest
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
      MONGO_INITDB_DATABASE: test
    volumes:
      - ./db:/docker-entrypoint-initdb.d
      - ./db-data:/data/db
    ports:
      - 27017:27017
    networks:
      - api1
  app:
    restart: always
    build:
      context: .
    environment:
      DB_HOST: localhost
      DB_PORT: 27017
      DB_NAME: test
      DB_USER: developer
      DB_PASS: developer
      PORT: 3000
    ports:
      - 3000:3000
    networks:
      - api1
    depends_on:
      - mongo
    command: npm start
networks:
  api1:
    driver: bridge
Here is my init script:
/* eslint-disable no-undef */
try {
  print("CREATING USER");
  db.createUser({
    user: "developer",
    pwd: "developer",
    roles: [{ role: "readWrite", db: "test" }]
  });
} catch (error) {
  print(`Failed to create developer db user:\n${error}`);
}
And my dockerfile:
FROM node:10 as builder
RUN mkdir /home/node/app
WORKDIR /home/node/app
# Install dependencies
COPY package.json yarn.lock ./
RUN yarn install && yarn cache clean
# Copy source scripts
COPY . .
RUN yarn build
FROM node:10-alpine
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
COPY --from=builder --chown=node:node /home/node/app .
USER node
EXPOSE 3000
CMD ["node", "./build/bundle.js"]
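Worth noting about the compose file above: within the Compose network, containers reach each other by service name, so the app would address Mongo as mongo, while DB_HOST is set to localhost (which resolves to the app container itself). A small hypothetical helper (not from the question) showing how the connection URI would be assembled from those env variables:

```javascript
// Hypothetical helper: builds the URI the app service would use to reach Mongo.
// Inside the Compose network the hostname must be the service name "mongo",
// not "localhost", which points back at the app container.
function buildMongoUri(env) {
  const host = env.DB_HOST || "mongo";
  const port = env.DB_PORT || 27017;
  return `mongodb://${env.DB_USER}:${env.DB_PASS}@${host}:${port}/${env.DB_NAME}`;
}

console.log(buildMongoUri({
  DB_USER: "developer",
  DB_PASS: "developer",
  DB_HOST: "mongo",   // service name, not "localhost"
  DB_PORT: 27017,
  DB_NAME: "test"
}));
// → mongodb://developer:developer@mongo:27017/test
```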

Related

My React app's proxy not working in docker container

When I work without Docker (just running React and Django in two separate terminals) everything works fine, but when I use docker-compose the proxy does not work and I get this error:
Proxy error: Could not proxy request /doc from localhost:3000 to http://127.0.0.1:8000.
See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNREFUSED).
The work is complicated by the fact that after every change to package.json you need to delete node_modules and package-lock.json and then reinstall with npm install (because of the cache, proxy changes in package.json are not applied to the container). I have already tried specifying these proxy options:
"proxy": "http://localhost:8000/",
"proxy": "http://localhost:8000",
"proxy": "http://127.0.0.1:8000/",
"proxy": "http://0.0.0.0:8000/",
"proxy": "http://<my_ip>:8000/",
"proxy": "http://backend:8000/" (backend is the Django service name)
Nothing helps; the proxy only works when running without a container, so I conclude that the problem is in the Docker settings.
I saw a solution using an nginx image, but it doesn't work for me: at the development stage I don't need nginx and the million additional problems that come with it. There must be a way to solve this without nginx.
docker-compose.yml:
version: "3.8"
services:
  backend:
    build: ./monkey_site
    container_name: backend
    command: python manage.py runserver 127.0.0.1:8000
    volumes:
      - ./monkey_site:/usr/src/monkey_site
    ports:
      - "8000:8000"
    environment:
      - DEBUG=1
      - DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
    depends_on:
      - redis
    networks:
      - proj_network
  frontend:
    build: ./frontend
    container_name: frontend
    ports:
      - "3000:3000"
    command: npm start
    volumes:
      - ./frontend:/usr/src/frontend
      - ./monkey_site/static:/usr/src/frontend/src/static
    depends_on:
      - backend
    networks:
      - proj_network
  celery:
    build: ./monkey_site
    command: celery -A monkey_site worker --loglevel=INFO
    volumes:
      - ./monkey_site:/usr/src/monkey_site/
    depends_on:
      - backend
      - redis
  redis:
    image: "redis:alpine"
networks:
  proj_network:
React Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/frontend
COPY package.json .
RUN npm install
EXPOSE 3000
Django Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/monkey_site
COPY requirements.txt ./
RUN pip install -r requirements.txt
package.json:
{
  "name": "frontend",
  "version": "0.1.0",
  "private": true,
  "proxy": "http://127.0.0.1:8000",
  "dependencies": {
    ...
In Django I have django-cors-headers and all settings like:
CORS_ALLOWED_ORIGINS = [
    'http://localhost:3000',
    'http://127.0.0.1:3000',
]
Does anyone have any ideas how to solve this problem?
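One hedged observation about the setup above (an assumption, not a confirmed fix): the backend command binds runserver to 127.0.0.1, which only accepts connections from inside the backend container itself. The CRA dev server's proxy runs inside the frontend container, so the Django server would need to listen on all interfaces:

```yaml
# backend service in docker-compose.yml (sketch; other settings unchanged)
command: python manage.py runserver 0.0.0.0:8000
```

Combined with "proxy": "http://backend:8000" in package.json (which was already tried), the proxy can then resolve the backend service name over the Compose network; DJANGO_ALLOWED_HOSTS would also need to include backend.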

trying to add docker image for frontend vite app

I have working backend and db images in my compose setup, and I'm now trying to do the same with the frontend, but I'm having a much harder time getting my app to display. My impression was that I had to copy over the dist folder created by running vite build. I created an image from my frontend Dockerfile and updated my docker-compose file to include the frontend service, but when I navigate to port 3300 I get a 404. My server runs on 3300, and when I run Vite normally it starts a dev server on 3000. I'm also new to Vite, which has made this a little more confusing. I've tried changing the ports and which are exposed, but have had no luck. I had a much easier time containerizing the backend and the db. Thanks so much for any help!
Dockerfile:
FROM node:16-alpine
RUN mkdir -p /user/src/app
WORKDIR /user/src/app
COPY ["package.json", "package-lock.json", "./"] /user/src/app/
RUN npm ci
COPY . .
EXPOSE 3300
CMD [ "npm", "run", "server:run" ]
Dockerfile-frontend:
FROM node:16-alpine
WORKDIR /user/src/app
COPY . .
RUN npm ci
RUN npm run app:build
COPY dist /user/src/app
EXPOSE 3300
CMD ["npm", "run", "app:dev"]
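A possible direction for Dockerfile-frontend (a sketch under assumptions, not a confirmed fix): vite build only writes dist, and app:dev starts the dev server, so serving the built output on the port you actually publish may be what's missing. Vite's own preview server can serve dist:

```dockerfile
# Sketch: build the app, then serve the dist folder with Vite's preview server.
# Assumes "app:build" runs "vite build"; adjust the port to match the compose file.
FROM node:16-alpine
WORKDIR /user/src/app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run app:build
EXPOSE 3000
CMD ["npx", "vite", "preview", "--host", "0.0.0.0", "--port", "3000"]
```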
Docker-compose:
version: '3.9'
services:
  # mongo db service
  mongodb:
    container_name: db_container
    image: mongo:latest
    env_file:
      - ./.env
    restart: always
    ports:
      - $MONGODB_DOCKER_PORT:$MONGODB_DOCKER_PORT
    volumes:
      - ./mongodb:/data/db
  # node app service
  app:
    container_name: node_container
    image: sherlogs_app
    build: .
    env_file:
      - ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_LOCAL_PORT
    volumes:
      - .:/user/src/app
    stdin_open: true
    tty: true
    depends_on:
      - mongodb
  # frontend container
  frontend:
    container_name: frontend_container
    image: sherlogs/frontend-container
    build: .
    env_file:
      - ./.env
    ports:
      - $FRONTEND_LOCAL_PORT:$FRONTEND_LOCAL_PORT
    volumes:
      - .:/user/src/app
    depends_on:
      - mongodb
      - app
volumes:
  mongodb: {}

Can't access Adonis from Docker Container

I use Docker to contain my Adonis app. The build was successful, but when I access the app I get ERR_SOCKET_NOT_CONNECTED or ERR_CONNECTION_RESET.
My docker-compose file contains Adonis and a database. I previously used a setup similar to this for my Express.js app, and it had no problems.
The Adonis .env remains standard, with no modifications.
This is my setup:
# docker-compose.yml
version: '3'
services:
  adonis:
    build: ./adonis
    volumes:
      - ./adonis/app:/usr/src/app
    networks:
      - backend
    links:
      - database
    ports:
      - "3333:3333"
  database:
    image: mysql:5.7
    ports:
      - 33060:3306
    networks:
      - backend
    environment:
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "root"
      MYSQL_ROOT_PASSWORD: "root"
networks:
  backend:
    driver: bridge
# adonis/Dockerfile
FROM node:12-alpine
RUN npm i -g @adonisjs/cli
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./app/. .
RUN npm install
EXPOSE 3333
CMD ["adonis", "serve", "--dev"]
I couldn't spot anything wrong with my setup.
The serve command starts the HTTP server on the port defined inside the .env file in the project root.
You should have something like this (note that HOST has to be set to 0.0.0.0 instead of localhost to accept connections from the outside):
HOST=0.0.0.0
PORT=3333
APP_URL=http://${HOST}:${PORT}

Redis connection failed

I'm building an application which uses Node, Redis and Mongo. I finished development, and I want to containerize it with Docker.
Here's my Dockerfile:
FROM node:13.8.0-alpine3.11
RUN npm install -g pm2
WORKDIR /user/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
And here my docker-compose.yml
version: '3'
services:
  redis-server:
    container_name: scrapr-redis
    image: 'redis:6.0-rc1-alpine'
    ports:
      - '6379:6379'
  mongo-db:
    container_name: scrapr-mongo
    image: mongo
    ports:
      - '27017:27017'
    command: --auth
    environment:
      - MONGO_INITDB_ROOT_USERNAME=user
      - MONGO_INITDB_ROOT_PASSWORD=pass
      - MONGO_INITDB_DATABASE=db
  app:
    container_name: scrapr-node
    restart: always
    build: .
    ports:
      - '3000:3000'
      - '3001:3001'
    links:
      - mongo-db
    depends_on:
      - redis-server
    environment:
      - DB_USER=user
      - DB_PWD=pass
      - DB_NAME=db
      - REDIS_HOST=redis-server
    command: 'node index.mjs'
I can start the services successfully, but when Node starts it logs the following error:
Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
When I run docker ps -a, I can see that all containers are running.
Why can't it connect with redis? What did I miss?
127.0.0.1 does not look right to me at all. Let me get the quick checks out of the way first: are you sure you are using the REDIS_HOST env variable correctly in the Node app? I would add some console logging to your Node app to echo the env variables and check what they are.
Secondly, try attaching to the running scrapr-node with docker container exec -it scrapr-node sh (or /bin/bash if sh does not work).
Then run nslookup scrapr-redis from the shell; this will give you the IP address of the Redis container. If ping scrapr-redis gets a reply, then you know it's an issue with your Node app, not the Docker network.
You can also exec into the Redis container and run hostname -I, which should show the same IP address you saw from the other container.
This should help you to debug the issue.
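The debugging steps above can be sketched as:

```shell
docker container exec -it scrapr-node sh   # attach to the app container
nslookup scrapr-redis                      # resolve the redis service name to an IP
ping scrapr-redis                          # check reachability on the compose network
```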
EDIT:
Ensure that you are correctly getting the value from your environment into your Node app using process.env.REDIS_HOST, and then correctly using that value when connecting to Redis, something like:
const redis = require("redis");

const redisClient = redis.createClient({
  host: process.env.REDIS_HOST,
  port: 6379
});
I would not try to force 127.0.0.1 on the Docker network (if that is even possible); it is reserved as the loopback address.

Install mysql2 package manually - error

I'm trying to create an instance of Sequelize in my app. When I use docker-compose to build and run the application, it asks me to install mysql2 manually, and even though I've tried installing it with --save and -g, it won't work. Why is this error occurring and how can I fix it?
const Sequelize = require("sequelize");

const sequelize = new Sequelize('test', 'root', 'root', {
  host: database, // should resolve to the Compose service name "database"
  port: 3307,
  dialect: 'mysql'
});
Using the following docker-compose.yml file:
version: '3'
services:
  mongo:
    image: mongo:3.6
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/test
    links:
      - mongo
    depends_on:
      - mongo
    volumes:
      - .:/starter
      - /starter/node_modules
  database:
    image: mysql
    environment:
      MYSQL_DATABASE: "ticketgo"
      MYSQL_ROOT_PASSWORD: "pass"
    volumes:
      - "./sql:/docker-entrypoint-initdb.d"
    ports:
      - "3307:3307"
  adminer:
    image: "adminer"
    ports:
      - "8080:8080"
    links:
      - "database"
I get this error:
Error: Please install mysql2 package manually
I solved this problem by installing the mysql2 package:
npm install -g mysql2
For those who have this issue, I solved mine with npm install -g mysql2. It can happen if you install sequelize-cli globally; it might be bugged.
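Since Sequelize require()s the dialect package at runtime from the application's own node_modules, a local install is usually what it needs, though the answers above report a global install working. A quick generic check (a sketch, not from the thread) of whether mysql2 resolves inside the container:

```javascript
// Check whether the mysql2 dialect package can be resolved from this app's
// module paths; Sequelize loads it lazily when dialect is "mysql".
try {
  const resolved = require.resolve("mysql2");
  console.log("mysql2 found at", resolved);
} catch (err) {
  console.log("mysql2 is not installed locally; run: npm install --save mysql2");
}
```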
