Combine two docker images into one - javascript

The title may be a bit misleading; here is what I actually want to do:
I need a Node.js server that runs the tool built by the Dockerfile below. I don't want to run Docker inside Docker, so I need to combine the tool and the Node.js server in one image, but I don't know how, since I'm quite new to Docker.
Should I add the Node.js environment to the Dockerfile below, or create a new Dockerfile that builds on this one? And either way, how do I do it?
FROM leon/usd:latest
WORKDIR /usr/src/ufg
# Configuration
ARG UFG_RELEASE="3bf441e0eb5b6cfbe487bbf1e2b42b7447c43d02"
ARG UFG_SRC="/usr/src/ufg"
ARG UFG_INSTALL="/usr/local/ufg"
ENV USD_DIR="/usr/local/usd"
ENV LD_LIBRARY_PATH="${USD_DIR}/lib:${UFG_SRC}/lib"
ENV PATH="${PATH}:${UFG_INSTALL}/bin"
ENV PYTHONPATH="${PYTHONPATH}:${UFG_INSTALL}/python"
# Build + install usd_from_gltf
RUN git init && \
    git remote add origin https://github.com/google/usd_from_gltf.git && \
    git fetch --depth 1 origin "${UFG_RELEASE}" && \
    git checkout FETCH_HEAD && \
    python "${UFG_SRC}/tools/ufginstall/ufginstall.py" -v "${UFG_INSTALL}" "${USD_DIR}" && \
    cp -r "${UFG_SRC}/tools/ufgbatch" "${UFG_INSTALL}/python" && \
    rm -rf "${UFG_SRC}" "${UFG_INSTALL}/build" "${UFG_INSTALL}/src"
RUN mkdir /usr/app
WORKDIR /usr/app
# Start the service
ENTRYPOINT ["usd_from_gltf"]
CMD ["usd_from_gltf"]

In Node's case, you can basically just drop in the instructions to download the Node.js tarball and unpack it in place. The snippet below is based on the official Node Dockerfile, which additionally verifies checksums and signatures.
RUN cd /tmp \
    && curl -fsSLO --compressed "https://nodejs.org/dist/v13.12.0/node-v13.12.0-linux-x64.tar.xz" \
    && tar -xJf "node-v13.12.0-linux-x64.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
    && rm "node-v13.12.0-linux-x64.tar.xz" \
    && ln -s /usr/local/bin/node /usr/local/bin/nodejs
If you also need the Yarn package manager, the instructions for adding it are in that same linked dockerfile.
Another option, since leon/usd appears to use a Debian/Ubuntu base image, is to install Node.js with the distribution's package manager, e.g. RUN apt-get update && apt-get install -y nodejs.
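Putting it together, the combined image could look roughly like the sketch below. The COPY paths and the server.js entry point are assumptions about your project layout; adjust them to fit.
FROM leon/usd:latest
# ... the usd_from_gltf build steps from the Dockerfile above,
#     minus its ENTRYPOINT/CMD lines ...
# Install Node.js (Debian/Ubuntu base assumed)
RUN apt-get update && \
    apt-get install -y nodejs npm && \
    rm -rf /var/lib/apt/lists/*
# Add the Node.js server on top (paths and server.js are hypothetical)
WORKDIR /usr/app
COPY package.json .
RUN npm install
COPY . .
CMD ["node", "server.js"]
The server can then shell out to usd_from_gltf (which is already on the PATH) with child_process.execFile, so no Docker-in-Docker is needed.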

Related

Docker can't find index.html when dockerizing react app

I am pretty new to Docker and I want to dockerize my React app; the index.html file is under the public folder in the project. When I run the Docker image, it fails with an error stating that the index.html file is missing.
The error:
> flash@0.1.0 start
> react-scripts start

Could not find a required file.
  Name: index.html
  Searched in: /app/public

npm notice
npm notice New minor version of npm available! 8.11.0 -> 8.19.2
npm notice Changelog: https://github.com/npm/cli/releases/tag/v8.19.2
npm notice Run `npm install -g npm@8.19.2` to update!
npm notice
Below is the code of my Dockerfile:
FROM node:lts-alpine3.14 as build
RUN apk update && \
    apk upgrade && \
    apk add --no-cache bash git openssh
RUN mkdir /app
WORKDIR /app
COPY package.json .
RUN npm install -g --force npm#latest typescript#latest yarn#latest
RUN npm install
COPY . ./
RUN npm run build
# ---------------
FROM node:lts-alpine3.14
RUN mkdir -p /app/build
RUN apk update && \
    apk upgrade && \
    apk add git
WORKDIR /app
COPY --from=build /app/package.json .
RUN yarn install --production
COPY --from=build . .
EXPOSE 3000
EXPOSE 3001
ENV SERVER_PORT=3000
ENV API_PORT=3001
ENV NODE_ENV production
CMD ["npm", "start"]
Try attaching a shell to the container with
docker exec -it CONTAINER_NAME sh
(use sh rather than bash here: the final image is Alpine-based and never installs bash) and check where the index.html file actually is versus where it needs to be copied.
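For example, an inspection session might look like this (the container name is hypothetical):
docker exec -it my-react-container sh
ls /app            # is there a public/ directory at all?
ls /app/public     # react-scripts start expects public/index.html here
One likely culprit is the line COPY --from=build . . : relative source paths with --from are resolved against the root of the build stage, not its WORKDIR, so the app probably ends up under /app/app/... instead of /app. COPY --from=build /app . is likely what was intended.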

How can I run scripts from package.json on Windows with Linux commands?

I'm trying to start the server on my local system with the scripts in my package.json, but they use relative paths and Unix commands like cp.
I have installed Cygwin, and I also tried manually changing those commands to Windows equivalents, using \ instead of / in paths.
"prestart": "cp -v ./src/index.html ./dist && node svg-processing.js && cp -v ./src/components/icons.css ./dist",
You can run Linux commands directly on Windows through WSL (Windows Subsystem for Linux).
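If WSL is not an option, one cross-platform alternative (a suggestion, not the only tool for this) is to replace the raw cp calls with a Node-based wrapper such as shx, installed with npm install --save-dev shx. Note that shx's cp does not support the -v flag, so it is dropped here:
"prestart": "shx cp ./src/index.html ./dist && node svg-processing.js && shx cp ./src/components/icons.css ./dist"
Because shx runs through Node, the same script then works unchanged on Windows, macOS, and Linux.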

Minimal Docker image for doing some meteor unit tests

I'm running my unit tests in a Docker container (for my CI workflow).
Therefore I've built an image based on Ubuntu with Node.js (4.x) and Meteor (1.5).
I have to add an ubuntu user, as running as root causes problems with Meteor, and I have to set the locale to fix the known problem with MongoDB.
The resulting image is 2 GB, which seems unbelievable to me. It's way too much for just running some unit tests.
I also tried to use an Alpine version (node:4.8-alpine), but with that I can't get meteor test running.
My command to run the unit tests in my CI setting:
TEST_CLIENT=0 meteor test --once --driver-package dispatch:mocha --allow-superuser
And this is the Dockerfile I am using:
FROM ubuntu:16.04
COPY package.json ./
RUN apt-get update -y && \
    apt-get install -yqq \
        python \
        build-essential \
        apt-transport-https \
        ca-certificates \
        curl \
        locales \
        nodejs \
        npm \
        nodejs-legacy \
        sudo \
        git && \
    rm -rf /var/lib/apt/lists/*
## NodeJS and MeteorJS
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash -
RUN curl https://install.meteor.com/ | sh
## Dependencies
RUN npm install -g eslint eslint-plugin-react
RUN npm install -g standard
RUN npm install
## Locale
ENV OS_LOCALE="en_US.UTF-8"
RUN locale-gen ${OS_LOCALE}
ENV LANG=${OS_LOCALE} LANGUAGE=en_US:en LC_ALL=${OS_LOCALE}
## User
RUN useradd ubuntu && \
    usermod -aG sudo ubuntu && \
    mkdir -p /builds/project/testing/.meteor /home/ubuntu && \
    chown -Rh ubuntu:ubuntu /builds/project/testing/.meteor && \
    chown -Rh ubuntu:ubuntu /home/ubuntu
USER ubuntu
## Initialize meteor
RUN cd /builds/project/testing/ && meteor update --release 1.5
Maybe someone has an idea how to optimize this Dockerfile...
If you're not attached to your version of doing things, feel free to take a look at how I did it: https://hub.docker.com/r/simonsimcity/bitbucket-meteor/tags
Mine is about 300MB - and could be improved as well, I think.
I've based another Docker image on it which you can use without changes in Bitbucket, where you can do UI testing in Firefox and Chrome: https://hub.docker.com/r/simonsimcity/bitbucket-meteor-headless-browsers
Feel free to extend the idea as you want; it's licensed under MIT.
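As a general pattern (a sketch, not necessarily the linked image's actual approach), much of the bloat in the question's Dockerfile comes from caches and intermediate files that get baked into separate RUN layers. Collapsing the install steps into one layer and cleaning up within that same layer usually cuts the size substantially; the locale and user setup can stay as they are:
# single layer: install everything, then clean caches before the layer is committed
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash - && \
    apt-get update -y && \
    apt-get install -yqq python build-essential curl locales sudo git nodejs && \
    curl https://install.meteor.com/ | sh && \
    npm install -g eslint eslint-plugin-react standard && \
    npm cache clean && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/*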

Docker Image layers and ENTRYPOINT sequence

I have the following dockerfile
FROM maven:3.3.3-jdk-8
#install node
RUN apt-get update
RUN apt-get -qq update
RUN apt-get install -y nodejs npm
# TODO could uninstall some build dependencies
RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10
# Install packages for envsubst
RUN apt-get update && apt-get upgrade -y --force-yes && rm -rf /var/lib/apt/lists/*;
RUN apt-get update
RUN apt-get install -y gettext-base
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# cache package.json and node_modules to speed up builds
ADD src src
ADD package.json package.json
ADD node_modules node_modules
ADD pom.xml pom.xml
ADD Gruntfile.js Gruntfile.js
ADD gulpfile.js gulpfile.js
ADD settings.xml settings.xml
# Substitute dependencies from environment variables
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8000
Here is the entrypoint script
#!/bin/sh
envsubst < "/usr/src/app/src/js/envapp.js" > "/usr/src/app/src/js/app.js"
mvn clean install -DskipTests -s settings.xml
npm start
Note that the line envsubst < "/usr/src/app/src/js/envapp.js" > "/usr/src/app/src/js/app.js" reads the environment and updates the app.js file
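For context, envsubst substitutes ${VAR} placeholders in its input with values from the process environment; a quick illustration (API_URL is just an illustrative variable name):
export API_URL=http://localhost:8000
echo 'var apiUrl = "${API_URL}";' | envsubst
# prints: var apiUrl = "http://localhost:8000";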
I have confirmed that the file is updated by shelling into the container and opening the file.
However, when I open the app in the browser, it appears to still be reading the old value of the app.js file.
After a lot of debugging, I suspect that npm start is still reading an old layer of the Docker image. Is there a way to make sure the new changes are picked up by the npm start line?
I also tried passing npm start as an argument to ENTRYPOINT, but it had the same effect. Any ideas on what might be wrong?
I also tried splitting the image, copying the files into one image and using it as the base image for the image that does the environment manipulation, but I still have the same problem.
Here is how I run the docker container
docker run -e "ENVVARIABLE=VALUE" -i <image-name>

How to run livereload with gulp within a docker container?

I created a docker container to run tasks with gulp.
All tasks are running; the problem is I can't enable livereload in Chrome, although I exposed port 35729 in my container.
Here is the Dockerfile:
FROM ubuntu:latest
MAINTAINER jiboulex
EXPOSE 80 8080 3000 35729
RUN apt-get update
RUN apt-get install curl -y
RUN apt-get install software-properties-common -y
RUN add-apt-repository ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get install nodejs -y
RUN curl -L https://www.npmjs.com/install.sh | sh
RUN npm install --global gulp -y
# overwrite this with 'CMD []' in a dependent Dockerfile
CMD ["/bin/bash"]
I create the image with the following command:
docker build -t gulp_image .
I create a container:
docker run --name=gulp_container -i -t --rm -v /var/www/my_app:/var/www/my_app:rw gulp_image bash
then in my container
cd /var/www/my_app
gulp
Here is my Gulpfile.js
var gulp = require('gulp'),
    livereload = require('gulp-livereload'),
    exec = require('child_process').exec;

gulp.task('js', function() {
    gulp.src([
        './src/js/*.js'
    ]).pipe(livereload());
});

gulp.task('watch', function() {
    var onChange = function (event) {
        console.log('File ' + event.path + ' has been ' + event.type);
    };
    livereload.listen();
    gulp.watch([
        './src/js/*.js'
    ], ['js'])
    .on('change', onChange);
});

gulp.task('default', ['watch', 'js']);
When I edit a js file, I can see in my container that the files are processed, but when I try to enable livereload in my browser (Chrome), I get the following message: "Could not connect to LiveReload server..."
Anyone got a clue about what I missed or did wrong?
Thanks for reading!
Exposing ports in a container does not imply that the ports will be opened on the docker host. You should be using the docker run -p option. The documentation says:
-p=[] : Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Both hostPort and containerPort can be specified as a range of ports.
When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., -p 1234-1236:1234-1236/tcp)
(use 'docker port' to see the actual mapping)
Since you tried the -p containerPort form, the actual port opened on your host (Linux Mint) was randomly chosen by Docker when you ran the docker run command. To figure out which host port was chosen, you have to use the docker port command.
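For example, with the container name used above:
docker port gulp_container 35729
# prints something like 0.0.0.0:49153 -- the randomly assigned host port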
Since this is not convenient, you should use the -p hostPort:containerPort form, and specify that hostPort is 35729. (I also assume you expect ports 80, 8080 and 3000 to be accessible in the same manner)
The command to run your container would then be:
docker run --name=gulp_container -i -t --rm \
-v /var/www/my_app:/var/www/my_app:rw \
-p 35729:35729 \
-p 80:80 \
-p 8080:8080 \
-p 3000:3000 \
gulp_image bash
An easier way to deal with ports is to run your docker container in host networking mode. In this mode, any port opened on the container is in fact opened on the host network interface (they are actually both sharing the same interface).
You would then start your container with:
docker run --name=gulp_container -i -t --rm \
-v /var/www/my_app:/var/www/my_app:rw \
--net=host \
gulp_image bash
