/bin/sh: sequelize: not found in docker AWS elastic beanstalk - javascript

I have an API in a Docker container deployed on EBS (Elastic Beanstalk). Suddenly, after a new deploy, I started getting this error in EBS, even after rolling back to a stable deploy (it does work locally):
yarn run v1.22.19
backend | $ sequelize db:migrate
backend | /bin/sh: sequelize: not found
This is my Dockerfile:
FROM node:16-alpine
WORKDIR /app
COPY /app/package.json .
COPY /app/yarn.lock .
RUN yarn install
COPY /app .
COPY ./docker/production/node/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
CMD [ "/start" ]
The start script:
#!/bin/sh
set -o errexit
set -o pipefail
set -o nounset
yarn db:migrate
yarn start
and the docker-compose.yml file:
version: '3.9'
volumes:
  node_modules: {}
services:
  nginx:
    image: nginx
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./docker/production/nginx:/etc/nginx/conf.d
    depends_on:
      - app
    links:
      - app
  app:
    build:
      context: .
      dockerfile: ./docker/production/node/Dockerfile
    container_name: tralud_backend
    restart: on-failure
    volumes:
      - ./app:/app
      - node_modules:/app/node_modules
    env_file:
      - .env
    ports:
      - "3000:3000"
I tried running the same code version locally and it does work, so it seems the build is being done properly.
I also tried creating a new EBS environment with the old stable deploy and it works, but I start getting the same error again once I try to install new dependencies (the cors module).
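A hedged sanity check I could run on the EB instance, assuming the container name from the compose file above (tralud_backend), to see whether the sequelize binary actually made it into node_modules (one guess is that the named node_modules volume is serving stale contents):
docker exec tralud_backend ls /app/node_modules/.bin | grep sequelize
docker exec tralud_backend yarn install --check-files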

Related

Customize run command for heroku container:release

On Heroku I'm trying to release a pushed image as described here: https://stackoverflow.com/a/50788844/2771889
docker build --file Dockerfile.prod --tag registry.heroku.com/my-app/web .
heroku container:login
docker push registry.heroku.com/my-app/web
heroku container:release web
My heroku.yml looks like this (used when I deploy from GitHub, and works fine):
build:
docker:
web: Dockerfile.prod
release:
image: web
command:
- yarn typeorm migration:run && yarn console seed
run:
web: yarn start:prod
It seems that the run command is not taken into account. When running container:release the logs show:
Starting process with command node
Whereas during a release from GitHub I'd see the correct command:
Starting process with command /bin/sh -c yarn start:prod
The release command, however, is recognized and executed correctly.
How can I make sure container:release runs the container with the correct command?
I had to add a CMD to my Dockerfile:
# ...
CMD yarn start:prod
This doesn't break GitHub deployments with heroku.yml.
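To confirm which command is baked into the image before pushing, docker inspect can be used on the locally built tag (a quick sketch using the tag from the question):
docker inspect --format '{{.Config.Cmd}}' registry.heroku.com/my-app/web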

Docker error after CTRL+C: Cannot kill container

Please help me with the correct Dockerfile for a Next.js application.
I'm trying to use this example from the official repository: https://github.com/vercel/next.js/blob/canary/examples/with-docker/Dockerfile.multistage
My Dockerfile:
# Stage 1: Building the code
FROM node:lts-alpine@sha256:5edad160011cc8cfb69d990e9ae1cb2681c0f280178241d58eba05b5bfc34047 AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
ENV NODE_ENV production
RUN npm run build
RUN npm ci --only=production
# Stage 2: And then copy over node_modules, etc from that stage to the smaller base image
FROM node:lts-slim@sha256:f07d995a6b0bb73e3bd8fa42ba328dd0481f4f0a4c0c39008c05d91602bba6f1 AS production
USER node
ARG PORT
WORKDIR /app
# COPY package.json next.config.js .env* ./
COPY --chown=node:node --from=builder /app/public ./public
COPY --chown=node:node --from=builder /app/.next ./.next
COPY --chown=node:node --from=builder /app/node_modules ./node_modules
EXPOSE $PORT
CMD ["node_modules/.bin/next", "start"]
My docker-compose.yml:
version: "3"
services:
  nextjs:
    container_name: redcross-frontend
    ports:
      - 3000:3000
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - ./:/usr/src/app
    env_file:
      - .env.production
After running docker-compose up, I press CTRL+C to stop it.
In about half of the cases I then get this error:
[screenshot: "Cannot kill container" error]
I tried to use dumb-init by changing the command to CMD ["dumb-init", "node_modules/.bin/next", "start"].
But I still see the same error about half of the time.
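For what it's worth, dumb-init is not included in the node base images by default, so it has to be installed in the final stage before it can be used as PID 1. A rough sketch for the Debian-based production stage above (the install line has to come before USER node, since apt-get needs root):
# install dumb-init while still root, then use it as PID 1
RUN apt-get update && apt-get install -y --no-install-recommends dumb-init \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["dumb-init", "--"]
CMD ["node_modules/.bin/next", "start"]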

How to deal with Exec format error in docker-compose

I tried to build a server and a DB with Docker.
Here is my docker-compose.yml
version: '3'
services:
  api-server:
    build: ./api
    links:
      - 'db'
    ports:
      - '3000:3000'
    volumes:
      - ./api:/src
      - ./src/node_modules
    tty: true
    container_name: api-server
  db:
    build:
      context: .
      dockerfile: ./db/Dockerfile
    restart: always
    hostname: db
    environment:
      MYSQL_ROOT_PASSWORD: test
      MYSQL_USER: root
      MYSQL_PASSWORD: test
      MYSQL_DATABASE: db
    volumes:
      - './db:/config'
    ports:
      - 3306:3306
    container_name: db
Here is my Dockerfile
FROM node:alpine
WORKDIR /src
COPY . .
RUN rm -rf /src/node_modules
RUN rm -rf /src/package-lock.json
RUN yarn install
CMD yarn start:dev
After setting up the servers I tried to access the API, but the following error occurred:
Error: Error loading shared library /src/node_modules/bcrypt/lib/binding/napi-v3/bcrypt_lib.node: Exec format error
I wonder where the problem is and how to fix it.
If someone has an opinion, please let me know.
Thanks.
For everybody looking for a solution like me: the fix was adding a .dockerignore file with the following content:
.dockerignore
node_modules
My Dockerfile looks like this:
FROM node:14-alpine as development
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
Adding the .dockerignore file prevents the COPY . . command from copying the host's node_modules folder (including natively compiled binaries built for the host) into the image, which fixes the issue with bcrypt not loading.
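If the error still shows up after adding .dockerignore, the old host-built binaries can survive in cached layers or in the node_modules volume from a previous run; a hedged cleanup for the compose setup from the question:
docker-compose build --no-cache api-server
docker-compose up --force-recreate --renew-anon-volumes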

Run sequelize migrations in docker compose

I am trying to bring up two containers with Node and Postgres (I need to persist the content of the database). The code is below:
Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home
WORKDIR /home/node/api
COPY package.json /
USER node
RUN npm i
COPY --chown=node:node . .
EXPOSE 3000
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
docker-entrypoint.sh:
#!/bin/sh
npx sequelize-cli db:migrate
npm run dev
Well, here I don't know how to run the migrations, because on my computer I only run "sequelize db:migrate". I also have a script in package.json, for example: "migrate:dev": "NODE_ENV=development sequelize db:migrate".
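For example, a sketch of the entrypoint reusing that package.json script, so the same command works locally and in the container (assuming NODE_ENV and the DB variables come from the compose environment):
#!/bin/sh
# run the migrations through the existing npm script, then start the app
npm run migrate:dev
npm run dev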
And the docker-compose.yml:
version: '3'
services:
  app-app:
    container_name: app-app
    build: '.'
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    depends_on:
      - postgres-app
    networks:
      - app-connect
    ports:
      - '3000:3000'
  postgres-app:
    container_name: postgres-app
    image: postgres:11
    restart: unless-stopped
    volumes:
      - postgres-app-data:/data
    environment:
      POSTGRES_DB: ${DATABASE}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASS}
    networks:
      - app-connect
volumes:
  postgres-app-data:
networks:
  app-connect:
    driver: bridge
But the migrations do not run yet. I get an error when I try to use docker-entrypoint.sh; the build fails at the chmod step:
chmod: /docker-entrypoint.sh: Operation not permitted
ERROR: Service 'app-app' failed to build: The command '/bin/sh -c chmod +x /docker-entrypoint.sh' returned a non-zero code: 1
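One possible explanation for the chmod failure (just a guess on my part, not confirmed in this thread): USER node is set earlier in the Dockerfile, so /docker-entrypoint.sh is copied in owned by root and the unprivileged user cannot change its mode. A sketch of reordering those lines:
# copy and chmod the entrypoint while still root, then drop privileges
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
USER node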
If I remove all of this and only run npm start directly, everything works, but when I exec into the app container to run the migration it shows me this error:
docker exec -it 351 npm run migrate:dev
> app#1.0.0 migrate:dev /home/node/api
> NODE_ENV=development sequelize db:migrate
Sequelize CLI [Node: 12.14.0, CLI: 5.5.1, ORM: 5.21.3]
Loaded configuration file "src/config/database.js".
Using environment "development".
ERROR: getaddrinfo ENOTFOUND postgres
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! app#1.0.0 migrate:dev: `NODE_ENV=development sequelize db:migrate`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the app#1.0.0 migrate:dev script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/node/.npm/_logs/2020-01-09T12_43_32_377Z-debug.log
How can I do this? Thanks!
So, as per the comments above, the problem was that DB_HOST was set to the wrong value.
DB_HOST should be set to the service name, in this case: postgres-app.
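A minimal sketch of how the Sequelize config can pick that value up, assuming src/config/database.js reads the same variables used in the compose file (the contents here are illustrative, not the asker's actual file):
// src/config/database.js (illustrative sketch)
module.exports = {
  development: {
    host: process.env.DB_HOST || 'postgres-app', // must match the compose service name
    database: process.env.DATABASE,
    username: process.env.DB_USER,
    password: process.env.DB_PASS,
    dialect: 'postgres',
  },
};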

Livereload of ionic 2 with docker-compose instance does not work

In a project I have this Dockerfile:
FROM node:6.9.4
RUN npm install -g cordova#4.2.0 ionic#2.2.1
ENV DOCKER_CONTAINER_APP=/web-app
RUN mkdir -p $DOCKER_CONTAINER_APP
ADD . $DOCKER_CONTAINER_APP
WORKDIR $DOCKER_CONTAINER_APP
EXPOSE 8100 35729
RUN echo "ready to go!"
I am using docker-compose, and this is the docker-compose.yml file I use in my project:
version: '2'
services:
  web:
    build:
      context: .
    environment:
      - NODE_ENV=development
      - DEBUG='true'
    ports:
      - 8100:8100
      - 35729:35729
    volumes:
      - .:/web-app
      - ./node_modules:/web-app/node_modules
    command: sh -c 'npm install; ionic serve --all'
    stdin_open: true
Everything works well; this is the output of a docker-compose run web command:
[10:53:11] ionic-app-scripts 1.0.0
[10:53:18] watch started ...
[10:53:18] build dev started ...
[10:53:18] clean started ...
[10:53:18] clean finished in 57 ms
[10:53:18] copy started ...
[10:53:18] transpile started ...
[10:53:36] transpile finished in 17.96 s
[10:53:36] webpack started ...
[10:53:37] copy finished in 19.39 s
[10:53:51] webpack finished in 15.10 s
[10:53:51] sass started ...
[10:53:56] sass finished in 4.90 s
[10:53:56] build dev finished in 38.18 s
[10:53:57] watch ready in 39.27 s
[10:53:57] dev server running: http://localhost:8100/
But the native Ionic livereload does not work. How can I use livereload with this Ionic Docker image?
When I had a similar issue, I noticed failed attempts in the browser to contact port 53703. Here is a screenshot:
[screenshot: Chrome developer tools window]
The container I was using at that moment had been created with the command:
docker run -i -t -d --name ionic-dev -v /home/timur/Work/:/Work/ \
-p 8100:8100 -p 35729:35729 ionic-dev
So I stopped and deleted it
docker stop ionic-dev
docker rm ionic-dev
And created another container with this command (notice the additional published port 53703):
docker run -i -t -d --name ionic-dev -v /home/timur/Work/:/Work/ \
-p 8100:8100 -p 35729:35729 -p 53703:53703 ionic-dev
After that livereload started to work for me.
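The equivalent change for the docker-compose setup from the question would be publishing the extra port there as well (a sketch; the exact port can differ between ionic-app-scripts versions):
ports:
  - 8100:8100
  - 35729:35729
  - 53703:53703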
