I have a simple JavaScript application: index.html, index.css, index.js, and a folder called photos. I don't have any node modules or config file.
I am trying to dockerize my app by creating a Dockerfile and a docker-build.sh file. I have searched online, but I keep seeing Dockerfiles with Node.js examples. Any guide on how I can dockerize a simple vanilla JS app?
Here's what I have, but it currently gets stuck at Attaching to display-ui.
Dockerfile
# pull a nginx image
FROM nginx:alpine
ARG UID=101
RUN apk update \
&& apk upgrade \
&& apk add bash \
&& apk add jq \
&& rm -rf /var/cache/apk/*
# Set working directory to nginx asset directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static assets
RUN rm -rf ./*
# Copy assets over so Nginx can properly serve
COPY apps/explorer.css .
COPY apps/explorer.js .
COPY apps/index.html .
RUN chown -R nginx:nginx /usr/share/nginx/html
# implement changes required to run NGINX as an unprivileged user
RUN sed -i 's,listen 80;,listen 8080;,' /etc/nginx/conf.d/default.conf \
&& sed -i '/user nginx;/d' /etc/nginx/nginx.conf \
&& sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf \
&& sed -i "/^http {/a \ proxy_temp_path /tmp/proxy_temp;\n client_body_temp_path /tmp/client_temp;\n fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf \
# nginx user must own the cache and etc directory to write cache and tweak the nginx config
&& chown -R $UID:0 /var/cache/nginx \
&& chmod -R g+w /var/cache/nginx \
&& chown -R $UID:0 /etc/nginx \
&& chmod -R g+w /etc/nginx
EXPOSE 8080
USER nginx
# Containers run nginx with global directives and daemon off
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yml
services:
s3-ui:
container_name: display-ui
image: display-ui:latest
build:
context: .
dockerfile: Dockerfile
restart: always
stdin_open: true
tty: true
ports:
- "8080:8080"
environment:
APP_ADDR: ":8080"
MONITOR_ADDR: ":3090"
To run, I do:
docker-compose build
docker-compose up
What am I missing?
You can use the nginx image and copy your content into it. See "Hosting some simple static content" in the official image documentation at https://hub.docker.com/_/nginx/:
a simple Dockerfile can be used to generate a new image that includes the necessary content...
FROM nginx
COPY static-html-directory /usr/share/nginx/html
Place this file in the same directory as your directory of content ("static-html-directory"), run docker build -t some-content-nginx ., then start your container:
$ docker run --name some-nginx -d some-content-nginx
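Applied to the file layout in the question (index.html, index.css, index.js, and a photos folder next to the Dockerfile), a minimal sketch could be:

```dockerfile
# serve the static files with the stock nginx image; no Node.js needed
FROM nginx:alpine
COPY index.html index.css index.js /usr/share/nginx/html/
COPY photos /usr/share/nginx/html/photos
```

Note also that Attaching to display-ui is not an error: it is docker-compose's normal foreground output, after which the container's logs follow. Check whether the site actually responds (e.g. curl http://localhost:8080) before assuming the build is stuck.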
Related
I dockerized a Vue app using nginx and the app runs well when started. The problem comes when I refresh the page: I get a 404 error (image attached). I tried to configure my nginx.conf file like the solutions from How to use vue.js with Nginx? and still get the same error. Here are my current nginx.conf file and Dockerfile.
Error image:
nginx.conf file:
server {
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
}
Dockerfile:
# Step 1: Build Vue Project
FROM node:14.15.1 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Step 2: Create Nginx Server
FROM nginx:1.20 AS prod-stage
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I had the same problem. I found the answer here: https://stackoverflow.com/a/54193517/7923299. Add this to your Dockerfile:
# Add nginx config
COPY [your_nginx_file.conf] /temp/prod.conf
RUN envsubst /app < /temp/prod.conf > /etc/nginx/conf.d/default.conf
I would suggest adapting your nginx.conf, because it is best practice not to listen on port 80 inside a container.
I wrote a more detailed explanation here: How to config nginx for Vue-router on Docker
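If the nginx.conf shown in the question contains no environment placeholders, the envsubst step is unnecessary; a plain COPY into the image works just as well. A sketch based on the second stage of the Dockerfile from the question:

```dockerfile
# Step 2: Create Nginx Server
FROM nginx:1.20 AS prod-stage
# ship the custom server block with the try_files fallback
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The try_files $uri $uri/ /index.html line is what stops the 404 on refresh: history-mode routes have no matching file on disk, so nginx has to fall back to index.html and let Vue Router resolve the path.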
My college group and I are working on a software suite similar to IFTTT and/or Zapier. The suite is broken down into 3 parts: an application server that we chose to develop in JS, a mobile client in Flutter, and a web client in PHP Symfony.
To have a complete project we must deploy everything with Docker (Dockerfile and Compose). At this point we've managed to successfully build each component, even with our docker-compose, but the problem is that our application server and our web client don't seem to communicate, and we can't understand why.
PS: we must respect the following constraints:
The application server must run exposing the port 8080
The webclient service must run exposing the port 8081
Here's our code:
docker-compose.yml
version: "3"
services:
api:
build: "./API/"
restart: always
ports:
- "8080:8080"
networks:
- default
mobile:
build: "./MobileApp/"
volumes:
- "apk:/Mobile/"
nginx:
image: nginx:1.19.0-alpine
restart: on-failure
volumes:
- './WebClient/public/:/usr/src/app'
- './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
ports:
- '8081:80'
depends_on:
- php
networks:
- default
php:
build:
context: .
dockerfile: docker/php/Dockerfile
restart: on-failure
env_file:
- ./WebClient/.env
user: 1000:1000
networks:
- default
volumes:
apk:
networks:
default:
driver: bridge
Application server (API) Dockerfile
FROM node:lts
WORKDIR /usr/app
COPY package.json .
RUN npm install --quiet
COPY . .
ENV PORT 8080
EXPOSE 8080
CMD ["node" , "index.js"]
Web client (php) Dockerfile
FROM composer:2.0 as composer
FROM php:7.4.1-fpm
RUN docker-php-ext-install pdo_mysql
RUN pecl install apcu
RUN apt-get update && \
apt-get install -y \
libzip-dev \
unzip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install zip
RUN docker-php-ext-enable apcu
WORKDIR /usr/src/app
COPY --chown=1000:1000 WebClient /usr/src/app
RUN PATH=$PATH:/usr/src/app/vendor/bin:bin
RUN composer install
nginx default.conf
server {
server_name ~.*;
location / {
root /usr/src/app;
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
client_max_body_size 50m;
fastcgi_pass php:9000;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME /usr/src/app/public/index.php;
}
error_log /dev/stderr debug;
access_log /dev/stdout;
}
Mobile app Dockerfile (mobile)
FROM cirrusci/flutter
COPY ./ /app
WORKDIR /app
##USER ROOT
RUN rm -f .packages
RUN flutter pub get
RUN flutter clean
RUN flutter build apk
RUN mkdir /Mobile/
RUN cp build/app/outputs/apk/release/app-release.apk /Mobile/client.apk
docker-compose build && docker-compose up output
Creating b-yep-500-lil-5-1-area-colinmartinage_api_1 ... done
Creating b-yep-500-lil-5-1-area-colinmartinage_php_1 ... done
Creating b-yep-500-lil-5-1-area-colinmartinage_mobile_1 ... done
Creating b-yep-500-lil-5-1-area-colinmartinage_nginx_1 ... done
Attaching to b-yep-500-lil-5-1-area-colinmartinage_mobile_1, b-yep-500-lil-5-1-area-colinmartinage_api_1, b-yep-500-lil-5-1-area-colinmartinage_php_1, b-yep-500-lil-5-1-area-colinmartinage_nginx_1
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
api_1 | server is listening on 8080
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
php_1 | [07-Mar-2021 09:50:10] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
php_1 | [07-Mar-2021 09:50:10] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
php_1 | [07-Mar-2021 09:50:10] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
php_1 | [07-Mar-2021 09:50:10] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
b-yep-500-lil-5-1-area-colinmartinage_mobile_1 exited with code 0
nginx_1 | 10-listen-on-ipv6-by-default.sh: Can not modify /etc/nginx/conf.d/default.conf (read-only file system?), exiting
php_1 | [07-Mar-2021 09:50:10] NOTICE: fpm is running, pid 1
php_1 | [07-Mar-2021 09:50:10] NOTICE: ready to handle connections
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
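Two things stand out in the log and the compose file. First, the "Can not modify /etc/nginx/conf.d/default.conf (read-only file system?)" line comes from the :ro flag on the config bind mount; it only aborts the entrypoint's listen-on-ipv6 tweak script, and nginx still starts with the mounted config, so it is harmless here. Second, containers on the same compose network reach each other by service name (e.g. http://api:8080 from the php container), but JavaScript running in the browser cannot resolve those names; it must either call a host-published port or go through the nginx container. A sketch of the latter, using a hypothetical /api/ prefix added to docker/nginx/default.conf:

```nginx
# hypothetical addition to docker/nginx/default.conf: let the browser call
# the application server through the same origin, proxied by service name
location /api/ {
    proxy_pass http://api:8080/;
}
```

With something like this in place, the web client can request /api/... on port 8081 and nginx forwards it to the application server over the compose network.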
I use Docker to contain my Adonis app. The build was successful, but when I access the app I get ERR_SOCKET_NOT_CONNECTED or ERR_CONNECTION_RESET.
My docker-compose contains adonis and database services. Previously, I used a setup similar to this for my Express.js app, and it had no problem.
The Adonis .env remains standard, with no modification.
This is my setup:
# docker-compose.yml
version: '3'
services:
adonis:
build: ./adonis
volumes:
- ./adonis/app:/usr/src/app
networks:
- backend
links:
- database
ports:
- "3333:3333"
database:
image: mysql:5.7
ports:
- 33060:3306
networks:
- backend
environment:
MYSQL_USER: "user"
MYSQL_PASSWORD: "root"
MYSQL_ROOT_PASSWORD: "root"
networks:
backend:
driver: bridge
# adonis/Dockerfile
FROM node:12-alpine
RUN npm i -g @adonisjs/cli
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./app/. .
RUN npm install
EXPOSE 3333
CMD ["adonis", "serve", "--dev"]
I couldn't spot anything wrong with my setup.
The serve command starts the HTTP server on the port defined inside the .env file in the project root.
You should have something like this (note that HOST has to be set to 0.0.0.0 instead of localhost to accept connections from outside the container):
HOST=0.0.0.0
PORT=3333
APP_URL=http://${HOST}:${PORT}
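Since the database also runs as a separate compose service, the database settings in the same .env should point at the service name rather than localhost. A sketch, assuming the AdonisJS default variable names and the credentials from the compose file:

```
HOST=0.0.0.0
PORT=3333
DB_CONNECTION=mysql
# the compose service name resolves on the backend network
DB_HOST=database
# the in-network port, not the 33060 published to the host
DB_PORT=3306
DB_USER=user
DB_PASSWORD=root
```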
I've got a nodejs server inside docker:
FROM node:12
WORKDIR /app
COPY package.json /app
RUN yarn install
COPY . /app
EXPOSE 8080
CMD [ "yarn", "start" ]
And then I've got a dockerfile that is used for converting 3D models:
FROM leon/usd:latest
WORKDIR /usr/src/ufg
# Configuration
ARG UFG_RELEASE="3bf441e0eb5b6cfbe487bbf1e2b42b7447c43d02"
ARG UFG_SRC="/usr/src/ufg"
ARG UFG_INSTALL="/usr/local/ufg"
ENV USD_DIR="/usr/local/usd"
ENV LD_LIBRARY_PATH="${USD_DIR}/lib:${UFG_SRC}/lib"
ENV PATH="${PATH}:${UFG_INSTALL}/bin"
ENV PYTHONPATH="${PYTHONPATH}:${UFG_INSTALL}/python"
# Build + install usd_from_gltf
RUN git init && \
git remote add origin https://github.com/google/usd_from_gltf.git && \
git fetch --depth 1 origin "${UFG_RELEASE}" && \
git checkout FETCH_HEAD && \
python "${UFG_SRC}/tools/ufginstall/ufginstall.py" -v "${UFG_INSTALL}" "${USD_DIR}" && \
cp -r "${UFG_SRC}/tools/ufgbatch" "${UFG_INSTALL}/python" && \
rm -rf "${UFG_SRC}" "${UFG_INSTALL}/build" "${UFG_INSTALL}/src"
RUN mkdir /usr/app
WORKDIR /usr/app
# Start the service
ENTRYPOINT ["usd_from_gltf"]
CMD ["usd_from_gltf"]
The image works like so: when run, the 3D model passed as an argument is converted, and then the container stops.
I want to have my Node.js server running all the time, and when there's a request for conversion, the second image converts the file. I don't really care whether the second container runs on request or runs all the time.
How can I do this?
Context
I have always been running my Karma tests locally in PhantomJS, Google Chrome and Firefox without any problems. Currently, I'm looking to run the Karma tests in Docker and have been having problems running them in Firefox inside a Docker container, although the same container can run the Karma tests in Google Chrome without any problems.
Problem
I created a Docker container that contains Google Chrome, Firefox, JS libraries (node, npm, grunt, etc.), and VNC utilities (Xvfb, x11vnc). I started the VNC server and ran the tests. Firefox started and the socket was created with a unique ID. When I entered a VNC session, I could see that Firefox had started, the URL was loaded into the URL bar, and the Karma page was loaded. However, after about 2 seconds the webpage would freeze and Firefox would hang. Therefore I could not see the LOG: 'INFO[2015-10-16 20:19:15]: Router Started' message either.
Interesting Find while Reproducing this Manually
I've tried commenting out the lines that start Firefox, so that running the Karma tests only starts the Karma server. I then tried to run the tests with the following 2 methods:
Start a Bash session through docker exec -it <container_tag>, execute firefox, and type the server URL with the corresponding ID of the test run. Firefox didn't hang in this case and proceeded to start the test run.
Start a Bash session through docker exec -it <container_tag> and execute firefox <server_url_with_corresponding_id>. Firefox didn't hang in this case and proceeded to start the test run.
My Dockerfile
FROM ubuntu:14.04
#========================
# Environment Variables for Configuration
#========================
ENV GEOMETRY 1920x1080x24
ENV DISPLAY :0
#========================
# Install Required Packages
#========================
RUN apt-get update -qq && apt-get install -qqy \
wget \
firefox \
xvfb \
x11vnc \
nodejs \
npm
#========================
# Install Google Chrome (Latest Stable Version)
#========================
RUN \
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list && \
apt-get update -qq && \
apt-get install -qqy google-chrome-stable
#========================
# Clean up Apt
#========================
RUN \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
#========================
# Setup VNC Server
#========================
RUN \
mkdir -p ~/.vnc && \
x11vnc -storepasswd 1234 ~/.vnc/passwd
#========================
# Symlink NodeJS
#========================
RUN ln -s /usr/bin/nodejs /usr/bin/node
#========================
# Install Grunt and Grunt-CLI
#========================
RUN \
npm install -g grunt && \
npm install -g grunt-cli
#========================
# Setup Entry Point
#========================
COPY entry_point.sh /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/entry_point.sh
ENTRYPOINT ["/opt/bin/entry_point.sh"]
I believe that this is a problem relating to the karma-firefox-launcher or karma main library. If anyone can give me some pointers and ideas, that would be great!
I have already submitted PR to karma-firefox-launcher https://github.com/karma-runner/karma-firefox-launcher/pull/45.
This is just for others who might have fallen into this.
Firefox has an issue with having its profile folder on VirtualBox shared folders (see https://bugzilla.mozilla.org/show_bug.cgi?id=801274), which are used with a Docker setup. The trick is to specify the profile folder outside of the shared folder, like so:
in karma.conf.js:
browsers: [ 'FirefoxDocker' ],
browserNoActivityTimeout: 30000, // < might be necessary for slow machines
customLaunchers: {
FirefoxDocker: {
base: 'Firefox',
profile: '/tmp/firefox' // < location is up to you but make sure folder exists
}
},
Remember to update karma-firefox-launcher to v0.1.7 to make this work.
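Since the launcher expects the profile folder to exist, it can be created ahead of time. In the Dockerfile from the question, that could be done with one extra line (a sketch; the path must match the karma.conf.js above):

```dockerfile
# pre-create the Firefox profile folder referenced by the FirefoxDocker launcher
RUN mkdir -p /tmp/firefox
```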