Node.js Docker image environment variables

I have a Node.js application that is built into a Docker image. In this application I have a config file with some API settings (an API key, for example) that might change from time to time. Is it possible to launch the Docker image with an additional parameter and then access that parameter from the Node.js code (I assume this could be done through environment variables), so that I don't have to rebuild the image every time the value changes? This is the pseudo-code I assume could work:
docker run -p 8080:8080 paramApiKey="12345" mydockerimage
and then I'd like to access it from the node.js app:
var apiKey = process.env.paramApiKey
Can this somehow be achieved?

To define environment variables with Docker at the time you use the run command, you have to use the -e flag, and the format should be "name=value". That means your environment variable should be "paramApiKey=12345", so you can access it with process.env.paramApiKey in your application.
That being said, your command would look like:
docker run -p 8080:8080 -e "paramApiKey=12345" mydockerimage

Sure, just try:
docker run -p 8080:8080 -e "paramApiKey=12345" mydockerimage
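On the Node side, it's worth guarding against the variable being unset so a missing -e flag is obvious at startup. A minimal sketch (the fallback check and error message are illustrative, not part of the original answers):
// config.js - reads the API key passed via `docker run -e`
const apiKey = process.env.paramApiKey;

if (!apiKey) {
  // fail fast instead of making requests with an undefined key
  throw new Error('paramApiKey environment variable is not set');
}

module.exports = { apiKey };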

Related

Can't dockerize Nuxt.js application

I have a simple Nuxt.js application and I want to dockerize it. Here is the Dockerfile:
FROM node
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8010
CMD [ "npm", "start" ]
When I build it and run the container, it seems to work and I can see something like this:
Entrypoint app = server.js server.js.map
READY Server listening on http://127.0.0.1:8010
But when I try to open it in the browser, I just get an error: This page isn’t working.
So, in general, how can I dockerize my Nuxt.js application and make it work on my machine?
Your app binds to 127.0.0.1, which means it will only accept connections from inside the container. From the docs, it seems you can set the HOST environment variable to the binding address you want. Try this, which sets it to 0.0.0.0, meaning the app accepts connections from anywhere:
FROM node
ENV HOST=0.0.0.0
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8010
CMD [ "npm", "start" ]
When you run it, you should see READY Server listening on http://0.0.0.0:8010 rather than READY Server listening on http://127.0.0.1:8010.
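The same binding rule applies to any Node server, not just Nuxt. A plain-Node sketch of the difference (illustrative only; Nuxt reads the HOST variable internally):
// server.js - the bind address decides who can reach the server
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// '127.0.0.1' is reachable only from inside the container;
// '0.0.0.0' also accepts traffic arriving through the published port.
const host = process.env.HOST || '0.0.0.0';
server.listen(8010, host, () => {
  console.log(`Server listening on http://${host}:8010`);
});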

Writing a correct Dockerfile

I created an app using JavaScript (with D3.js), jQuery, and CSS, but no Node.js. It's your typical 'index.html' browser-run interface. I've been going through Docker tutorials and trying to figure out how to serve my app, but I've had no luck and have only been able to find tutorials for apps built with Node. I cannot, for the life of me, figure out what I'm doing wrong, but I'm wondering if the problem (or one of them) lies in my Dockerfile. Also, do I need to have used Node.js for all this to work? My app consists of the following:
A directory called Arena-Task. Within this directory, I have my index.html, my main JavaScript file called arena.js, and my CSS files. My other necessary files (images, etc.) are located in two other folders in the same directory called data and scripts.
So now, how would I write my Dockerfile so that I can build it using Docker and publish it to a server? I've attempted to follow Docker's example Dockerfile:
FROM node:current-slim
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
EXPOSE 8080
CMD [ "npm", "start" ]
COPY . .
But to be honest I'm not quite sure how to make the changes to accommodate my program. I can't figure out if a package.json is required because if it is, then don't I need to be using Node? I don't use any node modules or project dependencies like that in my app. Do I need to be? Is my problem more than just an incorrect Dockerfile?
Sorry that this question is all over the place, but I'm really new to the realm of the server-side. I appreciate any help! Let me know if I can provide any clarification.
Let's clarify a few things:
You need node and npm only when you actually use them, e.g. when your app depends on npm packages.
package.json is used by npm; it stores the list of installed packages.
In your case I don't see a need for Node. You can create a simple image with a simple web server, something that can serve your HTML/CSS/JS files over HTTP. The simplest one I know is nginx.
Also, in the Dockerfile you need to copy all your resources into the image you are building.
That is what COPY package.json . was doing, but in your case you have to copy the whole app folder into a folder in the Docker image (assuming app is the folder that holds all your files).
So the Dockerfile should look something like this:
FROM ubuntu
# update the package index first, or the install fails on a fresh image
RUN apt-get update && apt-get install -y nginx
COPY app app
COPY startup.sh /startup.sh
COPY ./nginx-default /etc/nginx/sites-available/default
There is no need for a default command, because you are going to start something else during docker run.
nginx-default is a configuration file that makes nginx act as a web server; it should look something like this:
server {
    listen 8080;
    server_name localhost;
    root /app;
}
nginx is very flexible; if you need something more from it, google it.
A Docker container has to keep some foreground (blocking) process running all the time, otherwise the container stops.
The easiest way I know is to create a startup.sh file that starts nginx as the first step and then runs an infinite loop:
#!/bin/bash
exit_script() {
  trap - SIGINT SIGTERM # clear the trap
  sudo service nginx stop
  exit 1
}

# register the trap so the container can shut nginx down cleanly
trap exit_script SIGINT SIGTERM

sudo service nginx start
while sleep 20; do
  CURRENT_TIME=$(date +"%T")
  echo "app is running: $CURRENT_TIME"
done
exit_script is a trap handler that helps stop the container quickly and cleanly rather than having it terminated, but you can omit it for testing purposes.
Finally, build the image (docker build -t {your-image-name} .) and start it with something like this:
docker run -p 8080:8080 {your-image-name} bash /startup.sh
That should work :), though you will most probably face a few errors because I wrote this from memory (e.g. you may need something else for nginx, or sudo is not installed by default in the latest ubuntu image).

Deploy Javascript Cron Jobs and Queues on Amazon Elastic Beanstalk

I have an Amazon Elastic Beanstalk application currently running my Node.js app.
I have created some queues with kue.js and crons with node-schedule.
Since I have many commands to run for the queues and crons, I can't fit them all into my current Node.js app.
I am willing to create a new application; the only problem is that I can only run one command.
I really don't want to spin up a separate EC2 instance (not connected to my Elastic Beanstalk service) to run all of those.
What can I do to fix it?
Thank you very much!
Since you want to use EB (Elastic Beanstalk), you could write a Dockerfile for the application. EB will detect it, ask you whether this is a Docker-based project, and take care of the rest. You just need to run all the scripts you need before your entry-point command CMD npm start, as shown below:
Dockerfile
FROM node:10.13-alpine
# Sets the working directory, creating it if it doesn't exist.
WORKDIR /app
# Install dependencies.
ADD package.json .
RUN npm install
# Copy your source code.
COPY . /app
# Run all your scripts here, or simply put them in a scripts.js and run that
RUN node scripts.js
# start app
CMD ["npm", "start"]

How to resolve path in env variable in package.json?

In Node I want to set START_DIR on process.env to process.cwd().
How do I do that within the scripts section of package.json?
I can't use an .env file, for example; this app doesn't use an env-file loader and I can't change that.
For example:
"scripts": {
"start": "set SOMEDIR=process.cwd() && node app",
....
console.log('res', process.env.START_DIR);
Just to be clear, process.env represents the environment of the Node process at runtime, so whatever environment variables are visible to the Node process can be accessed in your modules as process.env.WHATEVER_VAR.
And why not just call process.cwd() from your app's code? It will return the path from which you execute the node command, or in this case npm start. It would be helpful to know more about what you're trying to accomplish, as I don't see why you would want to do what I think you're trying to do.
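That direct approach would be a one-liner (a sketch, not from the original question):
// app.js - no environment variable needed
const startDir = process.cwd(); // directory `node app` / `npm start` was run from
console.log('res', startDir);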
If you really want to do exactly what you described, you can use node -e "console.log('something')" to output something to the shell. Here's how it might look when you run npm start from a bash shell in the directory you want process.cwd() to return (I'm not sure of the Windows equivalent):
"start": "export START_DIR=$(node -e \"console.log(process.cwd());\") && node app"
There are other options though. You could refer to the operating system's built-in variable representing the working directory. Looks like you may be using Windows, so that variable's name would be CD. I believe the full command would look something like this:
set SOMEDIR=%CD% && node app
Or, if you're starting the process from a bash shell (Linux and MacOS):
export SOMEDIR=$PWD && node app
You can also just access these variables directly in your scripts using process.env.CD or process.env.PWD.
The only danger with this method is that it assumes CD / PWD hasn't been manually set to some other value. In Windows, one way to circumvent this is to create a batch file wherever you're calling npm start from. In the file, execute the same command but replace %CD% with %~dp0, which refers to the path containing the file. Then set start to a Windows command to execute the file, something like call ./file.bat.
Similarly, in a bash environment create a shell script and use $(dirname $0) instead of $PWD. Make it executable with chmod +x name_of_file and set start to bash ./name_of_file.
One last thing: if the exact name of the variable doesn't matter, a config block in package.json can tell npm to create environment variables prefixed with npm_package_config_. More info in the npm config documentation.
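A short sketch of that mechanism (the startDir key is hypothetical):
// package.json:
//   "config": { "startDir": "/some/path" }
//
// app.js - the value is only present when started via an npm script
console.log(process.env.npm_package_config_startDir);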

How to execute a shell command before the ENTRYPOINT via the dockerfile

I have the following Dockerfile for my Node.js project:
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
# Replace with env variable
RUN envsubst < file1 > file2
EXPOSE 8080
CMD [ "npm", "start" ]
I run the docker container with the -e flag, providing the environment variable.
But I do not see the replacement. Will the RUN command be executed when the env variable is available?
Images are immutable
Dockerfile defines the build process for an image. Once built, the image is immutable (cannot be changed). Runtime variables are not something that would be baked into this immutable image. So Dockerfile is the wrong place to address this.
Using an entrypoint script
What you probably want to do is override the default ENTRYPOINT with your own script, and have that script do something with environment variables. Since the entrypoint script would execute at runtime (when the container starts), this is the correct time to gather environment variables and do something with them.
First, you need to adjust your Dockerfile to know about an entrypoint script. While Dockerfile is not directly involved in handling the environment variable, it still needs to know about this script, because the script will be baked into your image.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["npm", "start"]
Now, write an entrypoint script which does whatever setup is needed before the command is run, and at the end, exec the command itself.
entrypoint.sh:
#!/bin/sh
# Where $ENVSUBS is whatever command you are looking to run
$ENVSUBS < file1 > file2
npm install
# This will exec the CMD from your Dockerfile, i.e. "npm start"
exec "$#"
Here, I have included npm install, since you asked about this in the comments. I will note that this will run npm install on every run. If that's appropriate, fine, but I wanted to point out it will run every time, which will add some latency to your startup time.
Now rebuild your image, so the entrypoint script is a part of it.
Using environment variables at runtime
The entrypoint script knows how to use the environment variable, but you still have to tell Docker to import the variable at runtime. You can use the -e flag to docker run to do so.
docker run -e "ENVSUBS=$ENVSUBS" <image_name>
Here, Docker is told to define an environment variable ENVSUBS, and the value it is assigned is the value of $ENVSUBS from the current shell environment.
How entrypoint scripts work
I'll elaborate a bit on this, because in the comments, it seemed you were a little foggy on how this fits together.
When Docker starts a container, it executes one (and only one) command inside the container. This command becomes PID 1, just like init or systemd on a typical Linux system. This process is responsible for running any other processes the container needs to have.
By default, the ENTRYPOINT is /bin/sh -c. You can override it in Dockerfile, or docker-compose.yml, or using the docker command.
When a container is started, Docker runs the entrypoint command, and passes the command (CMD) to it as an argument list. Earlier, we defined our own ENTRYPOINT as /entrypoint.sh. That means that in your case, this is what Docker will execute in the container when it starts:
/entrypoint.sh npm start
Because ["npm", "start"] was defined as the command, that is what gets passed as an argument list to the entrypoint script.
Because we defined an environment variable using the -e flag, this entrypoint script (and its children) will have access to that environment variable.
At the end of the entrypoint script, we run exec "$@". Because "$@" expands to the argument list passed to the script, this will run
exec npm start
And because exec runs its arguments as a command, replacing the current process with itself, when you are done, npm start becomes PID 1 in your container.
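Since whatever npm start launches ends up as the container's main process, it receives termination signals directly; handling SIGTERM in the app lets docker stop shut it down cleanly instead of waiting for the kill timeout. A minimal sketch (the server setup is illustrative):
// server.js - handle the SIGTERM that `docker stop` sends
const http = require('http');

const server = http.createServer((req, res) => res.end('ok'));
server.listen(8080);

process.on('SIGTERM', () => {
  // close open connections, then exit before Docker's timeout
  server.close(() => process.exit(0));
});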
Why you can't use multiple CMDs
In the comments, you asked whether you can define multiple CMD entries to run multiple things.
You can only have one ENTRYPOINT and one CMD defined. These are not used at all during the build process. Unlike RUN and COPY, they are not executed during the build. They are added as metadata items to the image once it is built.
It is only later, when the image is run as a container, that these metadata fields are read, and used to start the container.
As mentioned earlier, the entrypoint is what is really run, and it is passed the CMD as an argument list. The reason they are separate is partly historical. In early versions of Docker, CMD was the only available option, and ENTRYPOINT was fixed as being /bin/sh -c. But due to situations like this one, Docker eventually allowed ENTRYPOINT to be defined by the user.
For images with bash as the default entrypoint, this is what I do to allow myself to run some scripts before the shell starts, if needed:
FROM ubuntu
COPY init.sh /root/init.sh
RUN echo 'a=(${BEFORE_SHELL//:/ }); for c in ${a[@]}; do source $c; done' >> ~/.bashrc
If you want to source a script at container login, you pass its path in the environment variable BEFORE_SHELL. Example using docker-compose:
version: '3'
services:
  shell:
    build:
      context: .
    environment:
      BEFORE_SHELL: '/root/init.sh'
Some remarks:
If BEFORE_SHELL is not set then nothing happens (we have the default behavior)
You can pass any script path available in the container, including mounted ones
The scripts are sourced so variables defined in the scripts will be available in the container
Multiple scripts can be passed (use a : to separate the paths)
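For example, BEFORE_SHELL: '/root/init.sh:/root/extra.sh' would source both scripts in order (the second path is hypothetical).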
Will the RUN command be executed when the env variable is available?
Environment variables set with the -e flag are set when you run the container.
The problem is that the Dockerfile is read at container build time, so the RUN command will not be aware of those environment variables.
The way to have environment variables set at build time is to add an ENV line to your Dockerfile. (https://docs.docker.com/engine/reference/builder/#/environment-replacement)
So your Dockerfile may be:
FROM node:latest
WORKDIR /src
ADD package.json .
ENV A YOLO
RUN echo "$A"
And the output :
$ docker build .
Sending build context to Docker daemon 2.56 kB
Step 1 : FROM node:latest
---> f5eca816b45d
Step 2 : WORKDIR /src
---> Using cache
---> 4ede3b23756d
Step 3 : ADD package.json .
---> Using cache
---> a4671a30bfe4
Step 4 : ENV A YOLO
---> Running in 7c325474af3c
---> eeefe2c8bc47
Removing intermediate container 7c325474af3c
Step 5 : RUN echo "$A"
---> Running in 35e0d85d8ce2
YOLO
---> 78d5df7d2322
You can see in the next-to-last step that when the RUN command launches, the container is aware that the environment variable is set.
I had an extremely stubborn container that would not run anything on startup. This technique worked well, and took me a day to find, as every other technique I tried failed.
Run docker inspect postgres to find the entrypoint script. In this case, it was docker-entrypoint.sh. This might vary by image and Docker version.
Open a shell into the container, then find the full path: find / -name docker-entrypoint.sh
Inspect the file: cat /usr/local/bin/docker-entrypoint.sh
In the Dockerfile, use sed to insert a new line 2 (using the 2i command).
# Insert into Dockerfile
RUN sed -i '2iecho Run on startup as user `whoami`.' /usr/local/bin/docker-entrypoint.sh
In my particular case, Docker ran this script twice on startup: first as root, then as user postgres. You can add a test so the command only runs as root.
