I've got a Node.js server inside Docker:
FROM node:12
WORKDIR /app
COPY package.json /app
RUN yarn install
COPY . /app
EXPOSE 8080
CMD [ "yarn", "start" ]
And then I've got a Dockerfile that is used for converting 3D models:
FROM leon/usd:latest
WORKDIR /usr/src/ufg
# Configuration
ARG UFG_RELEASE="3bf441e0eb5b6cfbe487bbf1e2b42b7447c43d02"
ARG UFG_SRC="/usr/src/ufg"
ARG UFG_INSTALL="/usr/local/ufg"
ENV USD_DIR="/usr/local/usd"
ENV LD_LIBRARY_PATH="${USD_DIR}/lib:${UFG_SRC}/lib"
ENV PATH="${PATH}:${UFG_INSTALL}/bin"
ENV PYTHONPATH="${PYTHONPATH}:${UFG_INSTALL}/python"
# Build + install usd_from_gltf
RUN git init && \
git remote add origin https://github.com/google/usd_from_gltf.git && \
git fetch --depth 1 origin "${UFG_RELEASE}" && \
git checkout FETCH_HEAD && \
python "${UFG_SRC}/tools/ufginstall/ufginstall.py" -v "${UFG_INSTALL}" "${USD_DIR}" && \
cp -r "${UFG_SRC}/tools/ufgbatch" "${UFG_INSTALL}/python" && \
rm -rf "${UFG_SRC}" "${UFG_INSTALL}/build" "${UFG_INSTALL}/src"
RUN mkdir /usr/app
WORKDIR /usr/app
# Start the service
ENTRYPOINT ["usd_from_gltf"]
CMD ["usd_from_gltf"]
The image works like so: when run, the 3D model passed as an argument is converted, and then the container stops.
I want to have my Node.js server running all the time, and when a conversion request comes in, the second image converts the file. I don't really care whether the second container runs on request or runs all the time.
How can I do this?
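One common shape for the "run on request" variant is for the server to shell out to a one-off `docker run` per conversion. A minimal sketch of assembling such a command; note that the image tag `gltf-converter`, the `models/` directory, and the `/data` mount point are assumptions for illustration, not names from the question:

```shell
# Sketch: compose the one-off `docker run` command the server would issue per
# request. "gltf-converter" and the /data mount point are assumed names.
convert_cmd() {
  model="$1"
  printf 'docker run --rm -v %s/models:/data gltf-converter /data/%s /data/%s.usdz\n' \
    "$PWD" "$model" "${model%.gltf}"
}

convert_cmd chair.gltf
```

The Node server would then spawn this command per request (e.g. via `child_process`), typically with the Docker socket mounted into the server container; keeping the converter container running all the time instead would require wrapping it in its own small HTTP service, since its entrypoint exits after one conversion.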
Related
I have a simple JavaScript application: index.html, index.css, a folder called photos, and index.js. I don't have any node modules or config file.
I am trying to dockerize my app by creating a Dockerfile and a docker-build.sh file. I have searched online, but I keep seeing Dockerfiles with Node.js examples. Any guide on how I can dockerize a simple vanilla JS app?
Here's what I have, but it currently gets stuck at Attaching to display-ui
Dockerfile
# pull a nginx image
FROM nginx:alpine
ARG UID=101
RUN apk update \
&& apk upgrade \
&& apk add bash \
&& apk add jq \
&& rm -rf /var/cache/apk/*
# Set working directory to nginx asset directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static assets
RUN rm -rf ./*
# Copy assets over so Nginx can properly serve
COPY apps/explorer.css .
COPY apps/explorer.js .
COPY apps/index.html .
RUN chown -R nginx:nginx /usr/share/nginx/html
# implement changes required to run NGINX as an unprivileged user
RUN sed -i 's,listen 80;,listen 8080;,' /etc/nginx/conf.d/default.conf \
&& sed -i '/user nginx;/d' /etc/nginx/nginx.conf \
&& sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf \
&& sed -i "/^http {/a \ proxy_temp_path /tmp/proxy_temp;\n client_body_temp_path /tmp/client_temp;\n fastcgi_temp_path /tmp/fastcgi_temp;\n uwsgi_temp_path /tmp/uwsgi_temp;\n scgi_temp_path /tmp/scgi_temp;\n" /etc/nginx/nginx.conf \
# nginx user must own the cache and etc directory to write cache and tweak the nginx config
&& chown -R $UID:0 /var/cache/nginx \
&& chmod -R g+w /var/cache/nginx \
&& chown -R $UID:0 /etc/nginx \
&& chmod -R g+w /etc/nginx
EXPOSE 8080
USER nginx
# Containers run nginx with global directives and daemon off
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yml
services:
  s3-ui:
    container_name: display-ui
    image: display-ui:latest
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    stdin_open: true
    tty: true
    ports:
      - "8080:8080"
    environment:
      APP_ADDR: ":8080"
      MONITOR_ADDR: ":3090"
To run, I do:
docker-compose build
docker-compose up
What am I missing?
You can use the nginx image and copy your content to it. See "Hosting some simple static content" at https://hub.docker.com/_/nginx/:
a simple Dockerfile can be used to generate a new image that includes the necessary content...
FROM nginx
COPY static-html-directory /usr/share/nginx/html
Place this file in the same directory as your directory of content ("static-html-directory"), run docker build -t some-content-nginx ., then start your container:
$ docker run --name some-nginx -d some-content-nginx
I have a TypeScript function defined at src/index.ts for AWS Lambda below. For the purpose of this question, it simply sends a JSON body with a "hello world" statement. While running the docker container locally and making a file change, the lambda return value should change accordingly.
import { Context, APIGatewayProxyResult, APIGatewayEvent } from "aws-lambda";

export const handler = async (
  event: APIGatewayEvent,
  context: Context
): Promise<APIGatewayProxyResult> => {
  return {
    statusCode: 200,
    headers: {
      "content-type": "application/json",
    },
    body: JSON.stringify({ text: "hello world" }),
  };
};
My yarn develop command is equivalent to the following shell command:
esbuild src/index.ts --watch --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=build/index.js
Paired with yarn develop, I use the following Dockerfile.dev to set up my local development environment:
FROM public.ecr.aws/lambda/nodejs:16
RUN npm install --global yarn
WORKDIR /usr/app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# where the transpiled index.js is mounted.
VOLUME /var/task/build
CMD [ "build/index.handler" ]
I use the following Makefile with a develop target to run my lambda locally.
SHELL := /bin/bash
APPNAME = hello-world

.PHONY: develop
develop:
	@docker build -f Dockerfile.dev -t $(APPNAME):dev .
	@docker run -p 9000:8080 -v $(shell pwd)/build:/var/task/build $(APPNAME):dev & yarn develop
Lastly, this is how I test my lambda output locally:
cfaruki@hello-world[main] curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 102 100 100 100 2 161 3 --:--:-- --:--:-- --:--:-- 166
{
"statusCode": 200,
"headers": {
"content-type": "application/json"
},
"body": "{\"text\":\"hello world\"}"
}
With this context in mind, the issue I run into is what happens when I make a file change. If I change the response body to anything else, the file change is acknowledged in my terminal output. But the resultant output of my AWS lambda does not change.
docker run -p 9000:8080 -v /path/to/hello-world/build:/var/task/build hello-world:dev & yarn develop
yarn run v1.22.19
$ esbuild src/index.ts --watch --bundle --minify --sourcemap --platform=node --target=es2020 --outfile=build/index.js
01 Feb 2023 14:54:54,725 [INFO] (rapid) exec '/var/runtime/bootstrap' (cwd=/usr/app, handler=)
[watch] build finished, watching for changes...
01 Feb 2023 14:55:00,290 [INFO] (rapid) extensionsDisabledByLayer(/opt/disable-extensions-jwigqn8j) -> stat /opt/disable-extensions-jwigqn8j: no such file or directory
01 Feb 2023 14:55:00,290 [WARNING] (rapid) Cannot list external agents error=open /opt/extensions: no such file or directory
START RequestId: 5b738544-3713-4b41-86f8-7eca4355d230 Version: $LATEST
END RequestId: 5b738544-3713-4b41-86f8-7eca4355d230
REPORT RequestId: 5b738544-3713-4b41-86f8-7eca4355d230 Init Duration: 1.35 ms Duration: 1143.56 ms Billed Duration: 1144 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
[watch] build started (change: "src/index.ts") // <----- HERE IS THE CHANGE
[watch] build finished
I have validated that any change to my src/index.ts results in a change in build/index.js. I validate this by SSHing into the docker container and printing the file contents of build/index.js:
docker exec -it <CONTAINER-ID> /bin/bash
cat /var/task/build/index.js
It seems to me that the issue I am encountering is that the AWS Lambda docker image does not support live reloading of the function on file change. If that is the case, is there a way I can affect this behavior without adding another dependency to my development workflow?
Or is my only option to destroy and rebuild my docker image each time I make a file change? I am aware that docker-compose simplifies the process of rebuilding docker images. But I want to minimize the amount of configuration required by my project.
My college group and I are working on a software suite similar to IFTTT and/or Zapier. The suite is broken down into 3 parts: an application server that we chose to develop in JS, a mobile client in Flutter, and a web client in PHP Symfony.
To have a complete project we must deploy everything with Docker (Dockerfile and Compose). At this moment we've managed to successfully build each component, even with our docker-compose, but the problem is that our application server and our web client don't seem to communicate, and we can't understand why.
PS: we must respect the following:
The application server must run exposing the port 8080
The webclient service must run exposing the port 8081
Here's our code:
docker-compose.yml
version: "3"

services:
  api:
    build: "./API/"
    restart: always
    ports:
      - "8080:8080"
    networks:
      - default

  mobile:
    build: "./MobileApp/"
    volumes:
      - "apk:/Mobile/"

  nginx:
    image: nginx:1.19.0-alpine
    restart: on-failure
    volumes:
      - './WebClient/public/:/usr/src/app'
      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
    ports:
      - '8081:80'
    depends_on:
      - php
    networks:
      - default

  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile
    restart: on-failure
    env_file:
      - ./WebClient/.env
    user: "1000:1000"
    networks:
      - default

volumes:
  apk:

networks:
  default:
    driver: bridge
Application server (API) Dockerfile
FROM node:lts
WORKDIR /usr/app
COPY package.json .
RUN npm install --quiet
COPY . .
ENV PORT 8080
EXPOSE 8080
CMD ["node" , "index.js"]
Web client (php) Dockerfile
FROM composer:2.0 as composer
FROM php:7.4.1-fpm
RUN docker-php-ext-install pdo_mysql
RUN pecl install apcu
RUN apt-get update && \
apt-get install -y \
libzip-dev \
unzip
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install zip
RUN docker-php-ext-enable apcu
WORKDIR /usr/src/app
COPY --chown=1000:1000 WebClient /usr/src/app
RUN PATH=$PATH:/usr/src/app/vendor/bin:bin
RUN composer install
nginx default.conf
server {
    server_name ~.*;

    location / {
        root /usr/src/app;
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        client_max_body_size 50m;
        fastcgi_pass php:9000;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/src/app/public/index.php;
    }

    error_log /dev/stderr debug;
    access_log /dev/stdout;
}
Mobile app Dockerfile (mobile)
FROM cirrusci/flutter
COPY ./ /app
WORKDIR /app
##USER ROOT
RUN rm -f .packages
RUN flutter pub get
RUN flutter clean
RUN flutter build apk
RUN mkdir /Mobile/
RUN cp build/app/outputs/apk/release/app-release.apk /Mobile/client.apk
docker-compose build && docker-compose up output
Creating b-yep-500-lil-5-1-area-colinmartinage_api_1 ... done
Creating b-yep-500-lil-5-1-area-colinmartinage_php_1 ... done
Creating b-yep-500-lil-5-1-area-colinmartinage_mobile_1 ... done
Creating b-yep-500-lil-5-1-area-colinmartinage_nginx_1 ... done
Attaching to b-yep-500-lil-5-1-area-colinmartinage_mobile_1, b-yep-500-lil-5-1-area-colinmartinage_api_1, b-yep-500-lil-5-1-area-colinmartinage_php_1, b-yep-500-lil-5-1-area-colinmartinage_nginx_1
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
api_1 | server is listening on 8080
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
php_1 | [07-Mar-2021 09:50:10] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
php_1 | [07-Mar-2021 09:50:10] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
php_1 | [07-Mar-2021 09:50:10] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
php_1 | [07-Mar-2021 09:50:10] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
b-yep-500-lil-5-1-area-colinmartinage_mobile_1 exited with code 0
nginx_1 | 10-listen-on-ipv6-by-default.sh: Can not modify /etc/nginx/conf.d/default.conf (read-only file system?), exiting
php_1 | [07-Mar-2021 09:50:10] NOTICE: fpm is running, pid 1
php_1 | [07-Mar-2021 09:50:10] NOTICE: ready to handle connections
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
When I run my .sh script I get: syntax error: unexpected end of file
#!/usr/bin/env bash

# Make sure the port is not already bound
if ss -lnt | grep -q :"$SERVER_PORT"; then
  echo "Another process is already listening to port $SERVER_PORT"
  exit 1
fi

RETRY_INTERVAL=${RETRY_INTERVAL:-0.2}

if ! systemctl is-active --quiet elasticsearch.service; then
  sudo systemctl start elasticsearch.service
  # Wait until Elasticsearch is ready to respond
  until curl --silent "$ELASTICSEARCH_HOSTNAME":"$ELASTICSEARCH_PORT" -w "" -o /dev/null; do
    sleep "$RETRY_INTERVAL"
  done
fi

# Run our API server as a background process
npm run serve &
until ss -lnt | grep -q :"$SERVER_PORT"; do
  sleep "$RETRY_INTERVAL"
done

npx cucumber-js spec/cucumber/features --require-module @babel/register --require spec/cucumber/steps

kill -15 0
The actual result when I run my script is: syntax error: unexpected end of file
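As an aside, the two `until` loops in the script follow the same wait-until pattern; a generic helper version looks like the sketch below, where the condition command, sleep interval, and attempt cap are illustrative parameters rather than anything from the script:

```shell
# Generic wait-until helper in the spirit of the script's retry loops.
# $1: condition command, $2: sleep interval, $3: max attempts (all assumed).
wait_for() {
  attempts=0
  until eval "$1"; do
    attempts=$((attempts + 1))
    [ "$attempts" -ge "$3" ] && return 1
    sleep "$2"
  done
}

wait_for 'test -d /tmp' 0.1 5 && echo "ready"
```

Bounding the attempts avoids the original script's failure mode of looping forever when the server never binds the port.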
Context
I have always been running my Karma tests locally in PhantomJS, Google Chrome and Firefox without any problems. Currently, I'm looking to run the Karma tests in Docker and have been having problems running them in Firefox inside a Docker container, although the same container can run the Karma tests in Google Chrome without any problems.
Problem
I created a Docker container that contains Google Chrome, Firefox, JS libraries (node, npm, grunt, etc.), and VNC utilities (Xvfb, x11vnc). I started the VNC server and ran the tests. Firefox started and the socket was created with a unique ID. When I entered a VNC session, I could see that Firefox had started, the URL was loaded into the URL bar, and the Karma page was loaded. However, after about 2 seconds, the webpage would freeze and Firefox hung. Therefore I could not see the LOG: 'INFO[2015-10-16 20:19:15]: Router Started' message either.
Interesting Find while Reproducing this Manually
I've tried commenting out the lines that start Firefox, so that running the Karma tests will only start the Karma server. I then tried to run the tests with the following 2 methods:
Start a Bash session through docker exec -it <container_tag>, execute firefox, and typed the server url with the corresponding ID of the test run. Firefox didn't hang in this case and proceeded to start the test run.
Start a Bash session through docker exec -it <container_tag>, execute firefox <server_url_with_coresponding_id>. Firefox didn't hang in this case and proceeded to start the test run.
My Dockerfile
FROM ubuntu:14.04
#========================
# Environment Variables for Configuration
#========================
ENV GEOMETRY 1920x1080x24
ENV DISPLAY :0
#========================
# Install Required Packages
#========================
RUN apt-get update -qq && apt-get install -qqy \
    wget \
    firefox \
    xvfb \
    x11vnc \
    nodejs \
    npm
#========================
# Install Google Chrome (Latest Stable Version)
#========================
RUN \
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list && \
apt-get update -qq && \
apt-get install -qqy google-chrome-stable
#========================
# Clean up Apt
#========================
RUN \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
#========================
# Setup VNC Server
#========================
RUN \
mkdir -p ~/.vnc && \
x11vnc -storepasswd 1234 ~/.vnc/passwd
#========================
# Symlink NodeJS
#========================
RUN ln -s /usr/bin/nodejs /usr/bin/node
#========================
# Install Grunt and Grunt-CLI
#========================
RUN \
npm install -g grunt && \
npm install -g grunt-cli
#========================
# Setup Entry Point
#========================
COPY entry_point.sh /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/entry_point.sh
ENTRYPOINT ["/opt/bin/entry_point.sh"]
I believe that this is a problem relating to the karma-firefox-launcher or karma main library. If anyone can give me some pointers and ideas, that would be great!
I have already submitted a PR to karma-firefox-launcher: https://github.com/karma-runner/karma-firefox-launcher/pull/45.
This is just for others who might have fallen into this.
Firefox has an issue with having its profile folder on VirtualBox shared folders (see https://bugzilla.mozilla.org/show_bug.cgi?id=801274), which are used with Docker setups. The trick is to specify a profile folder outside of the shared folder, like so:
in karma.conf.js:
browsers: ['FirefoxDocker'],
browserNoActivityTimeout: 30000, // < might be necessary for slow machines
customLaunchers: {
  FirefoxDocker: {
    base: 'Firefox',
    profile: '/tmp/firefox' // < location is up to you but make sure the folder exists
  }
},
Remember to update karma-firefox-launcher to v0.1.7 to make this work.
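Since the config only points at the directory and the "make sure folder exists" note above implies the launcher won't create it for you, creating it up front (using the /tmp/firefox path from the config above) is a one-liner, e.g. in the Dockerfile or an entry script:

```shell
# Create the Firefox profile directory the custom launcher points at
mkdir -p /tmp/firefox
test -d /tmp/firefox && echo "profile dir ready"
```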