In Node I want to set START_DIR on process.env to process.cwd().
How can I do that within the scripts section of package.json?
I can't use an .env file, for example: this app doesn't use an env file loader and I can't change that.
for example:
"scripts": {
"start": "set SOMEDIR=process.cwd() && node app",
....
console.log('res', process.env.START_DIR);
Just to be clear, process.env represents the environment of the Node process at runtime, so whatever environment variables are visible to the Node process can be accessed in your modules as process.env.WHATEVER_VAR.
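To illustrate that inheritance, here is a minimal sketch using plain sh (the variable name START_DIR is taken from the question; sh stands in for the Node process):

```shell
# A child process inherits variables exported by its parent shell,
# which is why process.env can see them in Node.
export START_DIR="$(pwd)"
# Any child process (sh here, node in your case) sees the exported variable:
sh -c 'echo "child sees: $START_DIR"'
```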
And why not just call process.cwd() from your app's code? It will return the path from which you execute the node command, or in this case npm start. It would be helpful to know more about what you're trying to accomplish, as I don't see why you would want to do what I think you're trying to do.
If you really want to do exactly what you described, you can use node -e "console.log('something')" to output something to the shell. Here's how it might look when you run npm start from a bash shell in the directory you want process.cwd() to return (I'm not sure of the Windows equivalent):
"start": "export START_DIR=$(node -e \"console.log(process.cwd());\") && node app"
There are other options though. You could refer to the operating system's built-in variable representing the working directory. Looks like you may be using Windows, so that variable's name would be CD. I believe the full command would look something like this:
set SOMEDIR=%CD% && node app
Or, if you're starting the process from a bash shell (Linux and MacOS):
export SOMEDIR=$PWD && node app
You can also just access these variables directly in your scripts using process.env.CD or process.env.PWD.
The only danger with this method is that it assumes CD / PWD hasn't been manually set to some other value. In Windows, one way to circumvent this is to create a batch file wherever you're calling npm start from. In the file, execute the same command but replace %CD% with %~dp0, which refers to the path containing the file. Then set start to a Windows command to execute the file, something like call ./file.bat.
Similarly, in a bash environment create a shell script and use $(dirname $0) instead of $PWD. Make it executable with chmod +x name_of_file and set start to bash ./name_of_file.
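A minimal sketch of such a wrapper script (the name start.sh and the echo are illustrative; you would launch node app where indicated):

```shell
#!/bin/sh
# start.sh - resolves the directory containing this script, regardless of
# the directory it is invoked from, unlike $PWD.
SOMEDIR="$(cd "$(dirname "$0")" && pwd)"
export SOMEDIR
echo "SOMEDIR=$SOMEDIR"
# node app  # the app would be started here and inherit SOMEDIR
```

Running this from any other directory still yields the script's own location.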
One last thing: if the name of the variable doesn't matter, package.json can tell npm to create environment variables prefixed by npm_config_. More info in the npm config documentation.
Related
I have an NPM script that looks like this:
"start": "server start --host $DEV_HOST"
There are lots of other scripts that also reference the $DEV_HOST env.
Currently $DEV_HOST is exported in my ~/.bashrc, but I would like to find a way of defining it locally to the project, for example in a .env file.
I wondered if npm offered a preall hook which would allow a simple bash file to be called which could load in the local envs, but no such hook exists.
It also appears that NPM doesn't offer any load mechanism for local envs out of the box.
The only solutions I can currently think of are:
Move all the scripts that need the local env to their own bash files and load in the .env file in each of these.
Call a separate script before each script that needs the local env that loads in the envs from the .env file.
Both of these seem unnecessarily cumbersome and don't scale well.
Is there any way of loading envs defined in a project-local file so that they are available for use in NPM scripts?
Note: I'm not asking how to load envs into an app. I'm asking how to make envs available to npm scripts. The two things are completely different. dotenv adds envs to the current process. Even if you created a Node script that used dotenv to load some envs, those envs wouldn't be available via $ variables, as they are not loaded into the shell environment. They would only be available within the same process via process.env.
You can simply echo an environment variable in your NPM script.
For example, run the following on your command line in your project root:
export SOME_ENV=blalala
then in your package.json you can use it like so:
"scrips:" {
"print-env": "echo $SOME_ENV"
}
This outputs blalala to the console.
If you want to define environment variables in a .env file and have them available in your scripts, use something like this: https://www.npmjs.com/package/better-npm-run
npm i better-npm-run
Create your .env file, then use the variables exactly as shown above.
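A sketch of what that setup might look like in package.json (the script name dev and the env values are illustrative; check the better-npm-run docs for the exact format):

```json
{
  "scripts": {
    "dev": "better-npm-run dev"
  },
  "betterScripts": {
    "dev": {
      "command": "node app",
      "env": {
        "START_DIR": "./src"
      }
    }
  }
}
```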
I have the following file for my nodejs project
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
# Replace with env variable
RUN envsubs < fil1 > file2
EXPOSE 8080
CMD [ "npm", "start" ]
I run the Docker container with the -e flag providing the environment variable, but I do not see the replacement. Will the RUN command be executed when the env variable is available?
Images are immutable
Dockerfile defines the build process for an image. Once built, the image is immutable (cannot be changed). Runtime variables are not something that would be baked into this immutable image. So Dockerfile is the wrong place to address this.
Using an entrypoint script
What you probably want to do is override the default ENTRYPOINT with your own script, and have that script do something with environment variables. Since the entrypoint script executes at runtime (when the container starts), this is the correct time to gather environment variables and do something with them.
First, you need to adjust your Dockerfile to know about an entrypoint script. While Dockerfile is not directly involved in handling the environment variable, it still needs to know about this script, because the script will be baked into your image.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["npm", "start"]
Now, write an entrypoint script which does whatever setup is needed before the command is run, and at the end, exec the command itself.
entrypoint.sh:
#!/bin/sh
# Where $ENVSUBS is whatever command you are looking to run
$ENVSUBS < file1 > file2
npm install
# This will exec the CMD from your Dockerfile, i.e. "npm start"
exec "$@"
Here, I have included npm install, since you asked about this in the comments. I will note that this will run npm install on every run. If that's appropriate, fine, but I wanted to point out it will run every time, which will add some latency to your startup time.
Now rebuild your image, so the entrypoint script is a part of it.
Using environment variables at runtime
The entrypoint script knows how to use the environment variable, but you still have to tell Docker to import the variable at runtime. You can use the -e flag to docker run to do so.
docker run -e "ENVSUBS=$ENVSUBS" <image_name>
Here, Docker is told to define an environment variable ENVSUBS, and the value it is assigned is the value of $ENVSUBS from the current shell environment.
How entrypoint scripts work
I'll elaborate a bit on this, because in the comments, it seemed you were a little foggy on how this fits together.
When Docker starts a container, it executes one (and only one) command inside the container. This command becomes PID 1, just like init or systemd on a typical Linux system. This process is responsible for running any other processes the container needs to have.
By default, the ENTRYPOINT is /bin/sh -c. You can override it in Dockerfile, or docker-compose.yml, or using the docker command.
When a container is started, Docker runs the entrypoint command, and passes the command (CMD) to it as an argument list. Earlier, we defined our own ENTRYPOINT as /entrypoint.sh. That means that in your case, this is what Docker will execute in the container when it starts:
/entrypoint.sh npm start
Because ["npm", "start"] was defined as the command, that is what gets passed as an argument list to the entrypoint script.
Because we defined an environment variable using the -e flag, this entrypoint script (and its children) will have access to that environment variable.
At the end of the entrypoint script, we run exec "$@". Because "$@" expands to the argument list passed to the script, this will run
exec npm start
And because exec runs its arguments as a command, replacing the current process with itself, when you are done, npm start becomes PID 1 in your container.
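A tiny standalone sketch of this entrypoint pattern outside Docker (the file name wrapper.sh and the echo are illustrative):

```shell
#!/bin/sh
# wrapper.sh - minimal entrypoint-style wrapper: do setup, then replace
# this process with whatever command was passed as arguments.
echo "setup done"
exec "$@"
```

Running sh wrapper.sh echo hello prints "setup done" followed by "hello"; after the exec, the passed command has replaced the wrapper as the running process.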
Why you can't use multiple CMDs
In the comments, you asked whether you can define multiple CMD entries to run multiple things.
You can only have one ENTRYPOINT and one CMD defined. These are not used at all during the build process. Unlike RUN and COPY, they are not executed during the build. They are added as metadata items to the image once it is built.
It is only later, when the image is run as a container, that these metadata fields are read, and used to start the container.
As mentioned earlier, the entrypoint is what is really run, and it is passed the CMD as an argument list. The reason they are separate is partly historical. In early versions of Docker, CMD was the only available option, and ENTRYPOINT was fixed as being /bin/sh -c. But due to situations like this one, Docker eventually allowed ENTRYPOINT to be defined by the user.
For images with bash as the default entrypoint, this is what I do to allow myself to run some scripts before the shell starts, if needed:
FROM ubuntu
COPY init.sh /root/init.sh
RUN echo 'a=(${BEFORE_SHELL//:/ }); for c in "${a[@]}"; do source $c; done' >> ~/.bashrc
and if you want to source a script at container login you pass its path in the environment variable BEFORE_SHELL. Example using docker-compose:
version: '3'
services:
shell:
build:
context: .
environment:
BEFORE_SHELL: '/root/init.sh'
Some remarks:
If BEFORE_SHELL is not set then nothing happens (we have the default behavior)
You can pass any script path available in the container, including mounted ones
The scripts are sourced so variables defined in the scripts will be available in the container
Multiple scripts can be passed (use a : to separate the paths)
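The colon-splitting loop from the .bashrc line can be sketched in isolation like this (bash, since it relies on arrays; the paths are illustrative and nothing is actually sourced here):

```shell
#!/bin/bash
BEFORE_SHELL="/root/init.sh:/opt/other.sh"
# Replace each ':' with a space, then let word splitting build a bash array
a=(${BEFORE_SHELL//:/ })
for c in "${a[@]}"; do
  echo "would source: $c"
done
```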
Will the RUN command be executed when the env variable is available?
Environment variables set with the -e flag are set when you run the container.
The problem is that the Dockerfile is read at image build time, so the RUN command will not be aware of those environment variables.
The way to have environment variables set at build time is to add an ENV line to your Dockerfile (https://docs.docker.com/engine/reference/builder/#/environment-replacement).
So your Dockerfile may be:
FROM node:latest
WORKDIR /src
ADD package.json .
ENV A YOLO
RUN echo "$A"
And the output :
$ docker build .
Sending build context to Docker daemon 2.56 kB
Step 1 : FROM node:latest
---> f5eca816b45d
Step 2 : WORKDIR /src
---> Using cache
---> 4ede3b23756d
Step 3 : ADD package.json .
---> Using cache
---> a4671a30bfe4
Step 4 : ENV A YOLO
---> Running in 7c325474af3c
---> eeefe2c8bc47
Removing intermediate container 7c325474af3c
Step 5 : RUN echo "$A"
---> Running in 35e0d85d8ce2
YOLO
---> 78d5df7d2322
You can see on the second-to-last line that, when the RUN command launched, the container was aware the environment variable was set.
I had an extremely stubborn container that would not run anything on startup. This technique worked well, and took me a day to find, as every other technique I tried failed.
Run docker inspect postgres to find the entrypoint script. In this case, it was docker-entrypoint.sh. This might vary by container type and Docker version.
Open a shell into the container, then find the full path: find / -name docker-entrypoint.sh
Inspect the file: cat /usr/local/bin/docker-entrypoint.sh
In the Dockerfile, use sed to insert a line at position 2 (using 2i).
# Insert into Dockerfile
RUN sed -i '2iecho Run on startup as user `whoami`.' /usr/local/bin/docker-entrypoint.sh
In my particular case, Docker ran this script twice on startup: first as root, then as user postgres. You can use a test on whoami to run the command only under root.
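The sed insertion can be sketched against a throwaway script (GNU sed syntax, as used in the Dockerfile above; BSD/macOS sed would need sed -i ''; the file path and injected echo are illustrative):

```shell
# Create a stand-in for docker-entrypoint.sh
printf '#!/bin/sh\necho original\n' > /tmp/entry.sh
# Insert a new line before line 2, as in the Dockerfile RUN above
sed -i '2iecho injected' /tmp/entry.sh
cat /tmp/entry.sh
```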
I have a Node.js application which is built into a Docker image. In this application I have a config file with some API urls (an API key, for example) which might change from time to time. Is it possible to launch the Docker image with some additional parameter and then access this param from the Node.js code (I assume this could be done using environment variables), so as not to rebuild the image every time the value of this param changes? This is the pseudo code which I assume can be used:
docker run -p 8080:8080 paramApiKey="12345" mydockerimage
and then I'd like to access it from the node.js app:
var apiKey = process.env.paramApiKey
Can this somehow be achieved?
In order to define environment variables with Docker at the time you use the run command, you have to use the -e flag, and the format should be "name=value", meaning that your env variable should be "paramApiKey=12345" so that you can access it via process.env.paramApiKey in your application.
That being said, your command would look like:
docker run -p 8080:8080 -e "paramApiKey=12345" mydockerimage
Sure, just try:
docker run -p 8080:8080 -e "paramApiKey=12345" mydockerimage
Just earlier, I posted my question:
https://stackoverflow.com/questions/28336443/how-to-not-put-my-js-files-in-user-myuser-for-node-js
I have a file, hello.js, located in /Users/MyUser/Desktop/Node/
I can see that my default directory is /Users/MyUser/
Okay, so I get that I need to change my working directory. What I have been able to find so far is to use >process.chdir('/Users/MyUser/Desktop/Node/');
Cool, that works, but now when I get out of the REPL shell, the directory resets.
The person who responded to my question said that I needed to run >node init and later npm install <name of dependency> --save
My first question: I have run >node init and see that I can create this package.json file; what does this do exactly?
Secondly: I was told that I need to add dependencies. Could someone please explain to me what this means in Node terms? Does a dependency simply mean a folder that I want Node to include? Do I want to add this Node folder on my Desktop to be able to run my scripts?
I am currently trying to go through the learnyounode courses, however I do not want to have to save all of these test files in my /User/MyUser directory, so any advice would be greatly appreciated.
Thanks
I have run >node init and see that I can create this package.json file, what does this do exactly?
npm init is used to create a package.json file interactively. This will ask you a bunch of questions, and then write a package.json for you.
package.json is just a file that handles the project's dependencies and holds various metadata relevant to the project (project description, version, license information, etc.).
I was told that I need to add dependencies. Could someone please
explain to me what this means in Node terms?
Let's say you're building an application that depends on a number of npm modules; you can specify them in your package.json file this way:
"dependencies": {
"express": "2.3.12",
"jade": ">= 0.0.1",
"redis": "0.6.0"
}
Now running npm install will install those packages, along with any packages they depend on.
A package is any of the following:
a folder containing a program described by a package.json file
a gzipped tarball containing (1)
a url that resolves to (2)
a <name>@<version> that is published on the registry with (3)
a <name>@<tag> that points to (4)
a <name> that has a "latest" tag satisfying (5)
a git url that, when cloned, results in (1)
If you need to install a dependency that hasn't been included in package.json, simply do npm install <packageName>. Whether or not you want to record this newly installed package in package.json is entirely up to you. You can also decide how this newly installed package should appear in your package.json:
npm install <packageName> [--save|--save-dev|--save-optional]:
--save: Package will appear in your dependencies.
--save-dev: Package will appear in your devDependencies.
--save-optional: Package will appear in your optionalDependencies.
Does a dependency simply mean a folder that I want node to include?
Umm, partly yes. You may consider dependencies as folders, typically stored in the node_modules directory.
Do I want to add this Node folder on my Desktop to be able to run my
scripts?
No, Node manages it all. npm install will automatically create the node_modules directory, and you can refer to those dependencies with require() in your .js files:
var express = require('express');
Node REPL simply provides a way to interactively run JavaScript and see the results. It can be used for debugging, testing, or just trying things out.
process.cwd() points to the directory from which the REPL itself was started. You may change it using process.chdir('/path'), but once you close the REPL session and restart, process.cwd() will again point to the directory from which the REPL was started.
If you are installing some packages/dependencies in Node project1 and think those dependencies could also be useful for Node project2, you can either:
install them again for project2 (to get an independent node_modules directory)
install them globally [using the -g flag]; see this
reference the packages in project2 as
var referencedDependency = require('/home/User/project1/node_modules/<dependency>')
Simply doing process.chdir('/home/User/project1/node_modules/') in the REPL and referencing the dependency as
var referencedDependency = require('<dependency>') in your .js file won't work.
>process.chdir('/Users/MyUser/Desktop/Node/'); changes the working directory only for that particular REPL session.
Hope it helps!
This has nothing to do with node.js but is rather inherent in the design of Unix (which in turn influences the design of shells on other operating systems).
Processes inherit values from their parent's environment but their environments are distinct.
That terse description of how process environments work has a sometimes unexpected behavior: you cannot change your parent's environment. It was designed this way explicitly for security reasons.
What this means is that when you change the working directory in a process and quit that process, your shell's working directory will not be affected. Indeed, your shell's working directory isn't affected even while the process (in this case, the Node REPL) is running.
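This is easy to see in any shell (a minimal sketch; the subshell plays the role of the child process):

```shell
cd /tmp
# The subshell (a child process) changes its own directory...
( cd / && pwd )    # prints /
# ...but the parent's working directory is untouched:
pwd                # prints /tmp
```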
This exact question is often asked by people writing shell scripts who want a script that cds into some place. But it's also common to find it asked by people writing other languages such as Perl, Tcl, Ruby, etc. (even C).
The answer is always the same regardless of language: it's not possible to change the parent's directory from another program/script/process.
I'm not sure how Windows handles it, so it may be possible to do it there. But it's not possible on Unixen.
When executing a NodeJS App in the command prompt, you usually do this:
node myscript.js
Is it possible to do something like this?
node http://www.myhostedsite.com/myscript.js
I'd use wget http://www.myhostedsite.com/myscript.js && node myscript.js to avoid the differences in the REPL (such as _ being a reference to the last return value, which can break libraries like underscore.js).
You've also tagged socket.io in your question, which implies you might need some dependencies to be available as well. In that case you may need to install those dependencies globally with npm install -g <dependency name> in order for the script to run successfully.