I'm trying to create an instance of Sequelize in my app. When I use docker-compose to build and run the application, it asks me to install mysql2 manually, and even though I've tried installing it with both --save and -g, it won't work. Why is this error occurring and how can I fix it?
const sequelize = new Sequelize('test', 'root', 'root', {
host: database,
port: 3307,
dialect: 'mysql'
});
Using the following docker-compose.yml file:
version: '3'
services:
  mongo:
    image: mongo:3.6
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - MONGODB_URI=mongodb://mongo:27017/test
    links:
      - mongo
    depends_on:
      - mongo
    volumes:
      - .:/starter
      - /starter/node_modules
  database:
    image: mysql
    environment:
      MYSQL_DATABASE: "ticketgo"
      MYSQL_ROOT_PASSWORD: "pass"
    volumes:
      - "./sql:/docker-entrypoint-initdb.d"
    ports:
      - "3307:3307"
  adminer:
    image: "adminer"
    ports:
      - "8080:8080"
    links:
      - "database"
I get this error:
Error: Please install mysql2 package manually
I solved this problem by installing the mysql2 package:
npm install -g mysql2
For those who have this issue, I solved mine with npm install -g mysql2. It can happen if you install sequelize-cli globally, which might be bugged.
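A more reliable fix than a global install, assuming the web image runs npm install during its build, is to add mysql2 as a regular project dependency; Sequelize loads the driver for the mysql dialect from the project's node_modules:
```
npm install --save mysql2
docker-compose up --build
```
Note that the compose file above mounts an anonymous volume over /starter/node_modules, so packages installed globally on the host never reach the container; the dependency has to be baked into the image.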
I have an issue with creating a record for the room model. It works when I try to insert a new element into the user table, but with the room model it doesn't work. Here's an image with more info.
root@70cfe0072344:/usr/src/api# yarn prisma migrate dev
yarn run v1.22.19
$ /usr/src/api/node_modules/.bin/prisma migrate dev
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "trandandan", schema "public" at "postgres:5432"
Already in sync, no schema change or pending migration was found.
✔ Generated Prisma Client (4.7.1 | library) to ./node_modules/@prisma/client in 99ms
Done in 3.36s.
root@70cfe0072344:/usr/src/api# yarn prisma generate
yarn run v1.22.19
$ /usr/src/api/node_modules/.bin/prisma generate
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
✔ Generated Prisma Client (4.7.1 | library) to ./node_modules/@prisma/client in 90ms
You can now start using Prisma Client in your code. Reference: https://pris.ly/d/client
```
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
```
Done in 1.64s.
root@70cfe0072344:/usr/src/api#
I tried running yarn prisma migrate dev and yarn prisma generate both inside the container and on the host, but it didn't work.
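For reference, the failing insert is presumably something like this (a sketch only; the actual fields of the room model aren't shown in the question):
```
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

// Hypothetical call: the real fields of the `room` model aren't shown
// in the question, so `name` here is a placeholder.
await prisma.room.create({
  data: { name: 'room-1' },
})
```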
Here is my docker-compose file for reference:
version: '3.7'
services:
  api:
    container_name: api
    build:
      context: ./api
      target: development
    volumes:
      - ./api:/usr/src/api
      - /usr/src/api/node_modules
    ports:
      - ${SERVER_PORT}:${SERVER_PORT}
      - 5555:5555 # for prisma studio
      - 9229:9229 # for debugging
    command: yarn start:dev
    networks:
      - webnet
    depends_on:
      - postgres
  postgres:
    container_name: postgres
    image: postgres:14
    networks:
      - webnet
    environment:
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      PG_DATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
networks:
  webnet:
volumes:
  pgdata:
I have working backend and db images within my project, and I'm now trying to do the same with the frontend, but I'm having a much harder time getting my app to display. My understanding is that I need to copy over the dist folder created by running vite build. I created an image from my frontend Dockerfile and updated my docker-compose file to include the frontend service, but when I navigate to port 3300 I get a 404. My server runs on 3300, and when I run vite normally it starts a dev server on 3000. I'm also new to Vite, which has made this a little more confusing. I've tried changing which ports are exposed but have had no luck; containerizing the backend and the db was much easier. Thanks so much for any help!
Dockerfile:
FROM node:16-alpine
RUN mkdir -p /user/src/app
WORKDIR /user/src/app
COPY ["package.json", "package-lock.json", "./"] /user/src/app/
RUN npm ci
COPY . .
EXPOSE 3300
CMD [ "npm", "run", "server:run" ]
Dockerfile-frontend:
FROM node:16-alpine
WORKDIR /user/src/app
COPY . .
RUN npm ci
RUN npm run app:build
COPY dist /user/src/app
EXPOSE 3300
CMD ["npm", "run", "app:dev"]
Docker-compose:
version: '3.9'
services:
  # mongo db service
  mongodb:
    container_name: db_container
    image: mongo:latest
    env_file:
      - ./.env
    restart: always
    ports:
      - $MONGODB_DOCKER_PORT:$MONGODB_DOCKER_PORT
    volumes:
      - ./mongodb:/data/db
  # node app service
  app:
    container_name: node_container
    image: sherlogs_app
    build: .
    env_file:
      - ./.env
    ports:
      - $NODE_LOCAL_PORT:$NODE_LOCAL_PORT
    volumes:
      - .:/user/src/app
    stdin_open: true
    tty: true
    depends_on:
      - mongodb
  # frontend container
  frontend:
    container_name: frontend_container
    image: sherlogs/frontend-container
    build: .
    env_file:
      - ./.env
    ports:
      - $FRONTEND_LOCAL_PORT:$FRONTEND_LOCAL_PORT
    volumes:
      - .:/user/src/app
    depends_on:
      - mongodb
      - app
volumes:
  mongodb: {}
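For comparison, a common pattern for a frontend image is to build once inside the image and then serve the static dist output, rather than copying dist from the host and running the dev server. A sketch only, with the script name app:build and the dev-server port 3000 from the question assumed:
```
FROM node:16-alpine
WORKDIR /user/src/app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# Produces the static dist/ folder inside the image.
RUN npm run app:build
EXPOSE 3000
# `vite preview` serves the built dist/; --host binds 0.0.0.0 so the
# container port mapping actually reaches it.
CMD ["npx", "vite", "preview", "--host", "--port", "3000"]
```
Note that the bind mount .:/user/src/app on the frontend service would shadow everything built into the image, so it would likely need to be removed for this to work.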
I use Docker to containerize my Adonis app. The build was successful, but when I access the app I get ERR_SOCKET_NOT_CONNECTED or ERR_CONNECTION_RESET.
My docker compose contains adonis and database services. I previously used a setup similar to this for my Express.js app, and it had no problems.
The adonis .env remains standard, without modification.
This is my setup:
# docker-compose.yml
version: '3'
services:
  adonis:
    build: ./adonis
    volumes:
      - ./adonis/app:/usr/src/app
    networks:
      - backend
    links:
      - database
    ports:
      - "3333:3333"
  database:
    image: mysql:5.7
    ports:
      - 33060:3306
    networks:
      - backend
    environment:
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "root"
      MYSQL_ROOT_PASSWORD: "root"
networks:
  backend:
    driver: bridge
# adonis/Dockerfile
FROM node:12-alpine
RUN npm i -g @adonisjs/cli
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./app/. .
RUN npm install
EXPOSE 3333
CMD ["adonis", "serve", "--dev"]
I couldn't spot anything wrong with my setup.
The serve command starts the HTTP server on the port defined inside the .env file in the project root.
You should have something like this (note that HOST has to be set to 0.0.0.0 instead of localhost to accept connections from outside the container):
HOST=0.0.0.0
PORT=3333
APP_URL=http://${HOST}:${PORT}
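With HOST=0.0.0.0 the server binds to all interfaces inside the container, and the existing 3333:3333 mapping then makes it reachable at http://localhost:3333 on the host. One thing to double-check in this particular setup: since ./adonis/app is bind-mounted over /usr/src/app, the .env that matters is the one in that host folder.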
I set up a docker-compose file that connects my app to a MongoDB database. My problem is that the database never seems to be initialized: my script is not executed, and even though I can send some requests to the container, I only get connection refused errors due to authentication.
I followed this thread exactly and I don't know what I'm missing (the db folder is on the same level as my docker-compose.yml).
Looking for some help on this one, thanks!
edit: None of the logs I put in the init script are showing in the console; that's how I came to the conclusion that the file is not executed at all.
Here is my docker-compose file:
services:
  mongo:
    image: mongo:latest
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
      MONGO_INITDB_DATABASE: test
    volumes:
      - ./db:/docker-entrypoint-initdb.d
      - ./db-data:/data/db
    ports:
      - 27017:27017
    networks:
      - api1
  app:
    restart: always
    build:
      context: .
    environment:
      DB_HOST: localhost
      DB_PORT: 27017
      DB_NAME: test
      DB_USER: developer
      DB_PASS: developer
      PORT: 3000
    ports:
      - 3000:3000
    networks:
      - api1
    depends_on:
      - mongo
    command: npm start
networks:
  api1:
    driver: bridge
Here is my init script:
/* eslint-disable no-undef */
try {
  print("CREATING USER");
  db.createUser(
    {
      user: "developer",
      pwd: "developer",
      roles: [{ role: "readWrite", db: "test" }]
    }
  );
} catch (error) {
  print(`Failed to create developer db user:\n${error}`);
}
And my Dockerfile:
FROM node:10 as builder
RUN mkdir /home/node/app
WORKDIR /home/node/app
# Install dependencies
COPY package.json yarn.lock ./
RUN yarn install && yarn cache clean
# Copy source scripts
COPY . .
RUN yarn build
FROM node:10-alpine
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
COPY --from=builder --chown=node:node /home/node/app .
USER node
EXPOSE 3000
CMD ["node", "./build/bundle.js"]
I'm building an application that uses Node, Redis and Mongo. I finished development, and I want to containerize it with Docker.
Here's my Dockerfile:
FROM node:13.8.0-alpine3.11
RUN npm install -g pm2
WORKDIR /user/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
And here is my docker-compose.yml:
version: '3'
services:
  redis-server:
    container_name: scrapr-redis
    image: 'redis:6.0-rc1-alpine'
    ports:
      - '6379:6379'
  mongo-db:
    container_name: scrapr-mongo
    image: mongo
    ports:
      - '27017:27017'
    command: --auth
    environment:
      - MONGO_INITDB_ROOT_USERNAME=user
      - MONGO_INITDB_ROOT_PASSWORD=pass
      - MONGO_INITDB_DATABASE=db
  app:
    container_name: scrapr-node
    restart: always
    build: .
    ports:
      - '3000:3000'
      - '3001:3001'
    links:
      - mongo-db
    depends_on:
      - redis-server
    environment:
      - DB_USER=user
      - DB_PWD=pass
      - DB_NAME=db
      - REDIS_HOST=redis-server
    command: 'node index.mjs'
I can start the services successfully, but when Node starts, it generates the following error:
Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
When I run docker ps -a, I can see that all containers are running.
Why can't it connect with redis? What did I miss?
127.0.0.1 does not look right to me at all. Let me get the quick checks out of the way first: are you sure you are using the REDIS_HOST env variable in the node app correctly? I would add some console logging to your node app to echo the env variables and check what they are.
Secondly, try attaching to the running scrapr-node with docker container exec -it scrapr-node sh (or /bin/bash if sh does not work).
Then run nslookup scrapr-redis from the shell; this will give you the IP address of the redis container. If ping scrapr-redis returns, then you know it's an issue with your node app, not the docker network.
You can also exec into the redis node and run hostname -I, which should show the same IP address as you saw from the other container.
This should help you debug the issue.
EDIT:
Ensure that you are correctly getting the value from your environment into your node app using process.env.REDIS_HOST, and then correctly using that value when connecting to redis, something like:
const redisClient = redis.createClient({
host: process.env.REDIS_HOST,
port: 6379
});
I would not try to force 127.0.0.1 on the docker network (if that is even possible); it is reserved as the loopback address.
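Worth adding: redis.createClient() called with no options defaults to 127.0.0.1:6379, which matches the error above exactly, so an env variable that never reaches createClient is the most likely cause. One option is to fall back to the compose service name (a sketch, using the service name from the compose file above):
```
// Sketch: fall back to the compose service name if REDIS_HOST is unset.
const redis = require('redis');

const redisClient = redis.createClient({
  host: process.env.REDIS_HOST || 'redis-server',
  port: 6379
});
```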