I have Nginx running inside a Docker container and MySQL running on the host system, bound only to localhost. I want to connect to MySQL from within my container.
Is there any way to connect to this MySQL, or any other program on localhost, from within this Docker container?
This question is different from "How to get the IP address of the docker host from inside a docker container": the IP address of the Docker host could be the public IP or the private IP in the network, which may or may not be reachable from within the container (public IP if hosted at AWS or similar, for example). And even if you have the Docker host's IP address, having it does not mean you can reach it from within the container, since your Docker network may be overlay, host, bridge, macvlan, none, etc., which restricts the reachability of that IP address.
Edit:
If you are using Docker-for-mac or Docker-for-Windows 18.03+, connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).
If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.
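For example, a throwaway container started that way could reach MySQL on the host like this (a minimal sketch; the user name is a placeholder and the mysql image is only used for its client):
docker run --rm -it --add-host host.docker.internal:host-gateway mysql \
    mysql -h host.docker.internal -P 3306 -u someuser -p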
Otherwise, read below
TLDR
Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.
Note: This mode only works on Docker for Linux, per the documentation.
Note on docker container networking modes
Docker offers different networking modes when running containers. Depending on the mode you choose you would connect to your MySQL database running on the docker host differently.
docker run --network="bridge" (default)
Docker creates a bridge named docker0 by default. Both the docker host and the docker containers have an IP address on that bridge.
On the Docker host, type sudo ip addr show docker0 and you will get output looking like:
[vagrant@docker:~] $ sudo ip addr show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::5484:7aff:fefe:9799/64 scope link
valid_lft forever preferred_lft forever
So here my docker host has the IP address 172.17.42.1 on the docker0 network interface.
Now start a new container and get a shell on it: docker run --rm -it ubuntu:trusty bash and within the container type ip addr show eth0 to discover how its main network interface is set up:
root@e77f6a1b3740:/# ip addr show eth0
863: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 66:32:13:f0:f1:e3 brd ff:ff:ff:ff:ff:ff
inet 172.17.1.192/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::6432:13ff:fef0:f1e3/64 scope link
valid_lft forever preferred_lft forever
Here my container has the IP address 172.17.1.192. Now look at the routing table:
root@e77f6a1b3740:/# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.42.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 eth0
So the IP address of the docker host, 172.17.42.1, is set as the default route and is reachable from your container.
root@e77f6a1b3740:/# ping 172.17.42.1
PING 172.17.42.1 (172.17.42.1) 56(84) bytes of data.
64 bytes from 172.17.42.1: icmp_seq=1 ttl=64 time=0.070 ms
64 bytes from 172.17.42.1: icmp_seq=2 ttl=64 time=0.201 ms
64 bytes from 172.17.42.1: icmp_seq=3 ttl=64 time=0.116 ms
docker run --network="host"
Alternatively you can run a docker container with its network settings set to host. Such a container will share the network stack with the docker host, and from the container's point of view, localhost (or 127.0.0.1) will refer to the docker host.
Be aware that any port opened in your docker container will be opened on the docker host, and this without requiring the -p or -P docker run option.
IP config on my docker host:
[vagrant@docker:~] $ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:98:dc:aa brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe98:dcaa/64 scope link
valid_lft forever preferred_lft forever
and from a docker container in host mode:
[vagrant@docker:~] $ docker run --rm -it --network=host ubuntu:trusty ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:98:dc:aa brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe98:dcaa/64 scope link
valid_lft forever preferred_lft forever
As you can see both the docker host and docker container share the exact same network interface and as such have the same IP address.
Connecting to MySQL from containers
bridge mode
To access MySQL running on the docker host from containers in bridge mode, you need to make sure the MySQL service is listening for connections on the 172.17.42.1 IP address.
To do so, make sure you have either bind-address = 172.17.42.1 or bind-address = 0.0.0.0 in your MySQL config file (my.cnf).
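For example, the relevant fragment of my.cnf might look like this (a minimal sketch; the bridge IP is the one from the example above, yours may differ):
[mysqld]
bind-address = 172.17.42.1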
If you need to set an environment variable with the IP address of the gateway, you can run the following code in a container:
export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')
then in your application, use the DOCKER_HOST_IP environment variable to open the connection to MySQL.
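For example, with the standard mysql client the connection might look like this (the user name is a placeholder):
mysql -h "$DOCKER_HOST_IP" -P 3306 -u someuser -p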
Note: if you use bind-address = 0.0.0.0 your MySQL server will listen for connections on all network interfaces. That means your MySQL server could be reached from the Internet; make sure to set up firewall rules accordingly.
Note 2: if you use bind-address = 172.17.42.1 your MySQL server won't listen for connections made to 127.0.0.1. Processes running on the docker host that would want to connect to MySQL would have to use the 172.17.42.1 IP address.
host mode
To access MySQL running on the docker host from containers in host mode, you can keep bind-address = 127.0.0.1 in your MySQL configuration and connect to 127.0.0.1 from your containers:
[vagrant@docker:~] $ docker run --rm -it --network=host mysql mysql -h 127.0.0.1 -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 36
Server version: 5.5.41-0ubuntu0.14.04.1 (Ubuntu)
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
Note: do use mysql -h 127.0.0.1 and not mysql -h localhost; otherwise the MySQL client would try to connect using a Unix socket.
For all platforms
Docker v 20.10 and above (since December 14th 2020)
Use your internal IP address or connect to the special DNS name host.docker.internal which will resolve to the internal IP address used by the host.
On Linux, using the Docker command, add --add-host=host.docker.internal:host-gateway to your Docker command to enable this feature.
To enable this in Docker Compose on Linux, add the following lines to the container definition:
extra_hosts:
- "host.docker.internal:host-gateway"
For older macOS and Windows versions of Docker
Docker v 18.03 and above (since March 21st 2018)
Use your internal IP address or connect to the special DNS name host.docker.internal which will resolve to the internal IP address used by the host.
Linux support pending https://github.com/docker/for-linux/issues/264
For older macOS versions of Docker
Docker for Mac v 17.12 to v 18.02
Same as above but use docker.for.mac.host.internal instead.
Docker for Mac v 17.06 to v 17.11
Same as above but use docker.for.mac.localhost instead.
Docker for Mac 17.05 and below
To access the host machine from the docker container you must attach an IP alias to your network interface. You can bind whichever IP you want; just make sure you're not using it for anything else.
sudo ifconfig lo0 alias 123.123.123.123/24
Then make sure that your server is listening on the IP mentioned above or on 0.0.0.0. If it's listening only on localhost (127.0.0.1) it will not accept the connection.
Then just point your docker container to this IP and you can access the host machine!
To test you can run something like curl -X GET 123.123.123.123:3000 inside the container.
The alias will reset on every reboot so create a start-up script if necessary.
Solution and more documentation here: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
Use host.docker.internal instead of localhost.
I do a hack similar to the above posts: get the local IP and map it to an alias name (DNS) in the container. The major problem is dynamically getting the host IP address with a simple script that works on both Linux and OS X. This script works in both environments (even on Linux distributions with "$LANG" != "en_*" configured):
ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1
So, using Docker Compose, the full configuration will be:
Startup script (docker-run.sh):
export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)
docker-compose -f docker-compose.yml up
docker-compose.yml:
myapp:
build: .
ports:
- "80:80"
extra_hosts:
- "dockerhost:$DOCKERHOST"
Then change http://localhost to http://dockerhost in your code.
For a more advanced guide on how to customize the DOCKERHOST script, take a look at this post with an explanation of how it works.
Solution for Linux (kernel >=3.6).
OK, your Docker host has a default docker interface docker0 with IP address 172.17.0.1, and your container started with the default network settings, --net="bridge".
Enable route_localnet for docker0 interface:
$ sysctl -w net.ipv4.conf.docker0.route_localnet=1
Add these rules to iptables:
$ iptables -t nat -I PREROUTING -i docker0 -d 172.17.0.1 -p tcp --dport 3306 -j DNAT --to 127.0.0.1:3306
$ iptables -t filter -I INPUT -i docker0 -d 127.0.0.1 -p tcp --dport 3306 -j ACCEPT
Create a MySQL user that can connect from '%', that is, from any host except localhost:
CREATE USER 'user'@'%' IDENTIFIED BY 'password';
In your script, change the MySQL server address to 172.17.0.1.
From the kernel documentation:
route_localnet - BOOLEAN: Do not consider loopback addresses as martian source or destination while routing. This enables the use of 127/8 for local routing purposes (default FALSE).
This worked for me on an NGINX/PHP-FPM stack, without touching any code or networking, where the app was just expecting to be able to connect to localhost:
Mount mysqld.sock from the host to inside the container.
Find the location of the mysql.sock file on the host running mysql:
netstat -ln | awk '/mysql(.*)?\.sock/ { print $9 }'
Mount that file where it's expected inside the container (a complete example follows the list of common locations below):
docker run -v /hostpath/to/mysqld.sock:/containerpath/to/mysqld.sock
Possible locations of mysqld.sock:
/tmp/mysqld.sock
/var/run/mysqld/mysqld.sock
/var/lib/mysql/mysql.sock
/Applications/MAMP/tmp/mysql/mysql.sock # if running via MAMP
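Putting it together, a complete sketch might look like this (paths and the client image are assumptions; --socket points the client at the mounted file explicitly):
# locate the socket on the host, then mount it into a throwaway client container
SOCK=$(netstat -ln | awk '/mysql(.*)?\.sock/ { print $9 }')
docker run --rm -it -v "$SOCK":/var/run/mysqld/mysqld.sock mysql \
    mysql --socket=/var/run/mysqld/mysqld.sock -u root -p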
Until host.docker.internal works on every platform, you can use my container, which acts as a NAT gateway without any manual setup:
https://github.com/qoomon/docker-host
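A sketch of how that container is typically started (flags are taken from the project README at the time of writing, so check the repository for current usage; it needs extra network capabilities to act as a gateway):
docker run --name docker-host --cap-add=NET_ADMIN --cap-add=NET_RAW \
    --restart on-failure -d qoomon/docker-host
Other containers on the same Docker network can then use the hostname docker-host to reach the host machine.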
Simplest solution for Mac OSX
Just use the IP address of your Mac. On the Mac run this to get the IP address and use it from within the container:
ifconfig | grep 'inet 192'| awk '{ print $2}'
As long as the server running locally on your Mac, or in another docker container, is listening on 0.0.0.0, the docker container will be able to reach out at that address.
If you just want to access another docker container that is listening on 0.0.0.0, you can use 172.17.0.1.
Very simple and quick: check your host IP with ifconfig (Linux) or ipconfig (Windows), then create a docker-compose.yml:
version: '3' # specify docker-compose version
services:
nginx:
build: ./ # specify the directory of the Dockerfile
ports:
- "8080:80" # specify port mapping
extra_hosts:
- "dockerhost:<yourIP>"
This way, your container will be able to access your host. When accessing your DB, remember to use the name you specified before (in this case dockerhost) and the port on your host on which the DB is running.
Several solutions come to mind:
Move your dependencies into containers first
Make your other services externally accessible and connect to them with that external IP
Run your containers without network isolation
Avoid connecting over the network, use a socket that is mounted as a volume instead
The reason this doesn't work out of the box is that containers run with their own network namespace by default. That means localhost (or 127.0.0.1 pointing to the loopback interface) is unique per container. Connecting to this will connect to the container itself, and not services running outside of docker or inside of a different docker container.
Option 1: If your dependency can be moved into a container, I would do this first. It makes your application stack portable as others try to run your container on their own environment. And you can still publish the port on your host where other services that have not been migrated can still reach it. You can even publish the port to the localhost interface on your docker host to avoid it being externally accessible with a syntax like: -p 127.0.0.1:3306:3306 for the published port.
Option 2: There are a variety of ways to detect the host IP address from inside of the container, but each have a limited number of scenarios where they work (e.g. requiring Docker for Mac). The most portable option is to inject your host IP into the container with something like an environment variable or configuration file, e.g.:
docker run --rm -e "HOST_IP=$(ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p')" ...
This does require that your service is listening on that external interface, which could be a security concern. For other methods to get the host IP address from inside of the container, see this post.
Slightly less portable is to use host.docker.internal. This works in current versions of Docker for Windows and Docker for Mac. And in 20.10, the capability has been added to Docker for Linux when you pass a special host entry with:
docker run --add-host host.docker.internal:host-gateway ...
The host-gateway is a special value added in Docker 20.10 that automatically expands to a host IP. For more details see this PR.
Option 3: Running without network isolation, i.e. running with --net host, means your application is running on the host network namespace. This is less isolation for the container, and it means you cannot access other containers over a shared docker network with DNS (instead, you need to use published ports to access other containerized applications). But for applications that need to access other services on the host that are only listening on 127.0.0.1 on the host, this can be the easiest option.
Option 4: Various services also allow access over a filesystem based socket. This socket can be mounted into the container as a bind mounted volume, allowing you to access the host service without going over the network. For access to the docker engine, you often see examples of mounting /var/run/docker.sock into the container (giving that container root access to the host). With mysql, you can try something like -v /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysql.sock and then connect to localhost which mysql converts to using the socket.
Solution for Windows 10
Docker Community Edition 17.06.0-ce-win18 2017-06-28 (stable)
You can use the DNS name docker.for.win.localhost to resolve to the internal IP of the host. (Warning: some sources mention windows, but it should be win.)
Overview
I needed to do something similar, that is connect from my Docker container to my localhost, which was running the Azure Storage Emulator and CosmosDB Emulator.
The Azure Storage Emulator by default listens on 127.0.0.1. While you can change the IP it is bound to, I was looking for a solution that would work with the default settings.
This also works for connecting from my Docker container to SQL Server and IIS, both running locally on my host with default port settings.
For Windows,
I have changed the database url in spring configuration: spring.datasource.url=jdbc:postgresql://host.docker.internal:5432/apidb
Then build the image and run. It worked for me.
None of the answers worked for me when using Docker Toolbox on Windows 10 Home, but 10.0.2.2 did, since it uses VirtualBox which exposes the host to the VM on this address.
This is not an answer to the actual question. This is how I solved a similar problem. The solution comes totally from: Define Docker Container Networking so Containers can Communicate. Thanks to Nic Raboy
Leaving this here for others who might want to do REST calls between one container and another. Answers the question: what to use in place of localhost in a docker environment?
Get an overview of your networks: docker network ls
Create a new network: docker network create my-net
Start the first container: docker run -d -p 5000:5000 --network="my-net" --name "first_container" <MyImage1:v0.1>
Check the network settings of the first container: docker inspect first_container. "Networks": should have my-net.
Start the second container: docker run -d -p 6000:6000 --network="my-net" --name "second_container" <MyImage2:v0.1>
Check the network settings of the second container: docker inspect second_container. "Networks": should have my-net.
Open a shell in your second container: docker exec -it second_container sh or docker exec -it second_container bash.
Inside the second container, you can ping the first container with ping first_container. Also, code calls such as http://localhost:5000 can be replaced by http://first_container:5000.
If you're running with --net=host, localhost should work fine. If you're using default networking, use the static IP 172.17.0.1.
See this - https://stackoverflow.com/a/48547074/14120621
For those on Windows, assuming you're using the bridge network driver, you'll want to specifically bind MySQL to the IP address of the hyper-v network interface.
This is done via the configuration file under the normally hidden C:\ProgramData\MySQL folder.
Binding to 0.0.0.0 will not work. The address needed is shown in the docker configuration as well, and in my case was 10.0.75.1.
Edit: I ended up prototyping out the concept on GitHub. Check out: https://github.com/sivabudh/system-in-a-box
First, my answer is geared towards 2 groups of people: those who use a Mac, and those who use Linux.
The host network mode doesn't work on a Mac. You have to use an IP alias, see: https://stackoverflow.com/a/43541681/2713729
What is a host network mode? See: https://docs.docker.com/engine/reference/run/#/network-settings
Secondly, for those of you who are using Linux (my direct experience was with Ubuntu 14.04 LTS and I'm upgrading to 16.04 LTS in production soon), yes, you can make the service running inside a Docker container connect to localhost services running on the Docker host (eg. your laptop).
How?
The key is that when you run the Docker container, you have to run it in host mode. The command looks like this:
docker run --network="host" -id <Docker image ID>
When you do an ifconfig (you will need to apt-get install net-tools in your container for ifconfig to be callable) inside your container, you will see that the network interfaces are the same as the ones on the Docker host (eg. your laptop).
It's important to note that I'm a Mac user, but I run Ubuntu under Parallels, so using a Mac is not a disadvantage. ;-)
And this is how you connect an NGINX container to MySQL running on localhost.
For Linux, where you cannot change the interface the localhost service binds to
There are two problems we need to solve
Getting the IP of the host
Making our localhost service available to Docker
The first problem can be solved using qoomon's docker-host image, as given by other answers.
You will need to add this container to the same bridge network as your other container so that you can access it. Open a terminal inside your container and ensure that you can ping dockerhost.
bash-5.0# ping dockerhost
PING dockerhost (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.523 ms
Now, the harder problem: making the service accessible to Docker.
We can use telnet to check if we can access a port on the host (you may need to install this).
The problem is that our container will only be able to access services that bind to all interfaces, such as SSH:
bash-5.0# telnet dockerhost 22
SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
But services bound only to localhost will be inaccessible:
bash-5.0# telnet dockerhost 1025
telnet: can't connect to remote host (172.20.0.2): Connection refused
The proper solution here would be to bind the service to Docker's bridge network. However, this answer assumes that it is not possible for you to change this. So we will instead use iptables.
First, we need to find the name of the bridge network that docker is using with ifconfig. If you are using an unnamed bridge, this will just be docker0. However, if you are using a named network you will have a bridge starting with br- that docker will be using instead. Mine is br-5cd80298d6f4.
Once we have the name of this bridge, we need to allow routing from this bridge to localhost. This is disabled by default for security reasons:
sysctl -w net.ipv4.conf.<bridge_name>.route_localnet=1
Now to set up our iptables rule. Since our container can only access ports on the docker bridge network, we are going to pretend that our service is actually bound to a port on this network.
To do this, we will forward all requests to <docker_bridge>:port to localhost:port
iptables -t nat -A PREROUTING -p tcp -i <docker_bridge_name> --dport <service_port> -j DNAT --to-destination 127.0.0.1:<service_port>
For example, for my service on port 1025
iptables -t nat -A PREROUTING -p tcp -i br-5cd80298d6f4 --dport 1025 -j DNAT --to-destination 127.0.0.1:1025
You should now be able to access your service from the container:
bash-5.0# telnet dockerhost 1025
220 127.0.0.1 ESMTP Service Ready
First see this answer for the options that you have to fix this problem. But if you use docker-compose you can add network_mode: host to your service and then use 127.0.0.1 to connect to the local host. This is just one of the options described in the answer above. Below you can find how I modified docker-compose.yml from https://github.com/geerlingguy/php-apache-container.git:
---
version: "3"
services:
php-apache:
+ network_mode: host
image: geerlingguy/php-apache:latest
container_name: php-apache
...
+ indicates the line I added.
[Additional info] This also works in version 2.2, and both network_mode: "host" and network_mode: host work in docker-compose:
---
version: "2.2"
services:
php-apache:
+ network_mode: "host"
or
+ network_mode: host
...
I disagree with the answer from Thomasleveil.
Making mysql bind to 172.17.42.1 will prevent other programs using the database on the host from reaching it. This will only work if all your database users are dockerized.
Making mysql bind to 0.0.0.0 will open the db to outside world, which is not only a very bad thing to do, but also contrary to what the original question author wants to do. He explicitly says "The MySql is running on localhost and not exposing a port to the outside world, so its bound on localhost"
To answer the comment from ivant
"Why not bind mysql to docker0 as well?"
This is not possible. The mysql/mariadb documentation explicitly says it is not possible to bind to several interfaces. You can only bind to 0, 1, or all interfaces.
In conclusion, I have NOT found any way to reach the (localhost-only) database on the host from a docker container. That definitely seems like a very, very common pattern, but I don't know how to do it.
Try this:
version: '3.5'
services:
yourservice-here:
container_name: container_name
ports:
- "4000:4000"
extra_hosts: # <---- here
- localhost:192.168.1.202
- or-vitualhost.local:192.168.1.202
To get 192.168.1.202, use ifconfig.
This worked for me. Hope this helps!
In the 7 years since the question was asked, either Docker has changed or no one tried this way, so I will include my own answer.
I found all the other answers to use complex methods. Today, I needed this and found 2 very simple ways:
Use ipconfig or ifconfig on your host and make note of all IP addresses. At least two of them can be used by the container.
I have a fixed local network address on the WiFi LAN adapter: 192.168.1.101. This could be 10.0.1.101; the result will change depending on your router.
I use WSL on Windows, and it has its own vEthernet address: 172.19.192.1
Use host.docker.internal. Most answers have this or another form of it, depending on the OS. The name suggests it is now globally used by Docker.
A third option is to use the WAN address of the machine, in other words the IP given by the service provider. However, this may not work if the IP is not static, and it requires routing and firewall settings.
You need to know the gateway! My solution with a local server was to expose it on 0.0.0.0:8000, then create a Docker network with a known subnet and run the container on it:
docker network create --subnet=172.35.0.0/16 --gateway 172.35.0.1 SUBNET35
docker run -d -p 4444:4444 --net SUBNET35 <container-you-want-run-place-here>
So, now you can access your loopback through http://172.35.0.1:8000
Connect to the gateway address.
❯ docker network inspect bridge | grep Gateway
"Gateway": "172.17.0.1"
Make sure the process on the host is listening on this interface or on all interfaces and is started after docker. If using systemd, you can add the below to make sure it is started after docker.
[Unit]
After=docker.service
Example
❯ python -m http.server &> /dev/null &
[1] 149976
❯ docker run --rm python python -c "from urllib.request import urlopen;print(b'Directory listing for' in urlopen('http://172.17.0.1:8000').read())"
True
Here is my solution; it works for my case:
Set the local MySQL server to public access by commenting out the bind-address line in /etc/mysql/mysql.conf.d:
#bind-address = 127.0.0.1
Restart the MySQL server:
sudo /etc/init.d/mysql restart
Run the following commands to grant the root user access from any host:
mysql -uroot -proot
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH
GRANT OPTION;
FLUSH PRIVILEGES;
Create an sh script, run_docker.sh:
#!/bin/bash
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut -d / -f 1`
docker run -it -d --name web-app \
--add-host=local:${HOSTIP} \
-p 8080:8080 \
-e DATABASE_HOST=${HOSTIP} \
-e DATABASE_PORT=3306 \
-e DATABASE_NAME=demo \
-e DATABASE_USER=root \
-e DATABASE_PASSWORD=root \
sopheamak/springboot_docker_mysql
Or run with docker-compose:
version: '2.1'
services:
tomcatwar:
extra_hosts:
- "local:10.1.2.232"
image: sopheamak/springboot_docker_mysql
ports:
- 8080:8080
environment:
- DATABASE_HOST=local
- DATABASE_USER=root
- DATABASE_PASSWORD=root
- DATABASE_NAME=demo
- DATABASE_PORT=3306
You can get the host IP using the alpine image:
docker run --rm alpine ip route | awk 'NR==1 {print $3}'
This would be more consistent as you're always using alpine to run the command.
Similar to Mariano's answer, you can use the same command to set an environment variable:
DOCKER_HOST=$(docker run --rm alpine ip route | awk 'NR==1 {print $3}') docker-compose up
You can use a network alias for your machine.
OSX
sudo ifconfig lo0 alias 123.123.123.123/24 up
LINUX
sudo ifconfig lo:0 123.123.123.123 up
Then from the container you can reach the machine at 123.123.123.123.
Cgroups and namespaces play a major role in the container ecosystem.
Namespaces provide a layer of isolation: each container runs in a separate namespace and its access is limited to that namespace. Cgroups control the resource utilization of each container, whereas namespaces control what a process can see and access.
Here is a basic outline of the solution approach you could follow.
Use Network Namespace
When a container is spawned from an image, a network interface is defined and created. This gives the container a unique IP address and interface.
$ docker run -it alpine ifconfig
By changing the namespace to host, the container's network does not remain isolated to its own interface; the process has access to the host machine's network interfaces.
$ docker run -it --net=host alpine ifconfig
If the process listens on ports, they'll be listening on the host interface and mapped to the container.
Use PID Namespace
Changing the PID namespace allows a container to interact with processes beyond its normal scope.
This container will run in its own namespace.
$ docker run -it alpine ps aux
By changing the namespace to the host, the container can also see all the other processes running on the system.
$ docker run -it --pid=host alpine ps aux
Sharing Namespace
It is bad practice to do this in production because you are breaking out of the container security model, which might open up vulnerabilities and easy access for eavesdroppers. It is only for debugging tools and understanding the loopholes in container security.
The first container is an nginx server. It will create a new network and process namespace, and bind itself to port 80 of the newly created network interface.
$ docker run -d --name http nginx:alpine
Another container can now reuse this network namespace:
$ docker run --net=container:http mohan08p/curl curl -s localhost
Also, a container can share the PID namespace and see the processes of the shared container:
$ docker run --pid=container:http alpine ps aux
This allows you to give more privileges to containers without changing or restarting the application. In a similar way you can connect to MySQL on the host, and run and debug your application. But it's not recommended to go this way. Hope it helps.
Until the fix is merged into the master branch, to get the host IP just run this from inside the container:
ip -4 route list match 0/0 | cut -d' ' -f3
(as suggested by @Mahoney here).
I solved it by creating a user in MySQL for the container's ip:
$ sudo mysql
mysql> create user 'username'@'172.17.0.2' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on database_name.* to 'username'@'172.17.0.2' with grant option;
Query OK, 0 rows affected (0.00 sec)
$ sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf
bind-address = 172.17.0.1
$ sudo systemctl restart mysql.service
Then in the container: jdbc:mysql://172.17.0.1:3306/database_name
I setup a single node Kafka Docker container on my local machine like it is described in the Confluent documentation (steps 2-3).
In addition, I also exposed Zookeeper's port 2181 and Kafka's port 9092 so that I'd be able to connect to them from a client running on the local machine:
$ docker run -d \
-p 2181:2181 \
--net=confluent \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=2181 \
confluentinc/cp-zookeeper:4.1.0
$ docker run -d \
--net=confluent \
--name=kafka \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
Problem: When I try to connect to Kafka from the host machine, the connection fails because it can't resolve address: kafka:9092.
Here is my Java code:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("client.id", "KafkaExampleProducer");
props.put("key.serializer", LongSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
KafkaProducer<Long, String> producer = new KafkaProducer<>(props);
ProducerRecord<Long, String> record = new ProducerRecord<>("foo", 1L, "Test 1");
producer.send(record).get();
producer.flush();
The exception:
java.io.IOException: Can't resolve address: kafka:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.common.network.Selector.connect(Selector.java:214) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:176) [kafka-clients-2.0.0.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_144]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_144]
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.0.jar:na]
... 7 common frames omitted
Question: How do I connect to Kafka running in Docker? My code is running on the host machine, not in Docker.
Note: I know that I could theoretically play around with DNS setup and /etc/hosts but it is a workaround - it shouldn't be like that.
There is also a similar question here; however, it is based on the ches/kafka image. I use a confluentinc-based image, which is not the same.
Disclaimer
tl;dr - A simple port forward from the container to the host will not work, and no hosts files should be modified. What exact IP/hostname + port do you want to connect to? Make sure that value is set as advertised.listeners on the broker. Make sure that address and the servers listed as part of bootstrap.servers are actually resolvable (ping an IP/hostname, use netcat to check ports...)
To verify the ports are mapped correctly on the host, ensure that docker ps shows the kafka container is mapped from 0.0.0.0:<host_port> -> <advertised_listener_port>/tcp. The ports must match if trying to run a client from outside the Docker network.
The below answer uses confluentinc docker images to address the question that was asked, not wurstmeister/kafka. More specifically, the latter images are not well-maintained despite being one of the most popular Kafka docker images.
The following sections try to aggregate all the details needed to use another image. For other commonly used Kafka images, it's all the same Apache Kafka running in a container; you're just dependent on how it is configured, and on which variables make it so.
wurstmeister/kafka
Refer to their README section on listener configuration, and also read their Connectivity wiki.
bitnami/kafka
If you want a small container, try these. The images are much smaller than the Confluent ones and are much better maintained than wurstmeister. Refer to their README for listener configuration.
debezium/kafka
Docs on it are mentioned here.
Note: advertised host and port settings are deprecated. Advertised listeners covers both. Similar to the Confluent containers, Debezium can use KAFKA_ prefixed broker settings to update its properties.
Others
spotify/kafka is deprecated and outdated.
fast-data-dev or lensesio/box are great for an all-in-one solution, but are bloated if you only want Kafka
Your own Dockerfile - Why? Is something incomplete with these others? Start with a pull request rather than starting from scratch.
For supplemental reading, a fully-functional docker-compose, and network diagrams, see this blog by @rmoff
Answer
The Confluent quickstart (Docker) document assumes all produce and consume requests will be within the Docker network.
You could fix the problem of connecting to kafka:9092 by running your Kafka client code within its own container as that uses the Docker network bridge, but otherwise you'll need to add some more environment variables for exposing the container externally, while still having it work within the Docker network.
First add a protocol mapping of PLAINTEXT_HOST:PLAINTEXT that will map the listener protocol to a Kafka protocol
Key: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
Value: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
Then setup two advertised listeners on different ports. (kafka here refers to the docker container name; it might also be named broker, so double check your service + hostnames).
Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
Notice the protocols here match the left-side values of the protocol mapping setting above
When running the container, add -p 29092:29092 for the host port mapping of the advertised PLAINTEXT_HOST listener.
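Putting it all together, the docker run command from the question might then look like this (a sketch; the network, container name, and image tag are carried over from the question, so adjust them to your setup):
docker run -d \
  --net=confluent \
  --name=kafka \
  -p 29092:29092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:4.1.0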
tl;dr
(with the above settings)
If something still doesn't work, KAFKA_LISTENERS can be set to include <PROTOCOL>://0.0.0.0:<PORT> where both options match the advertised setting and Docker-forwarded port
Client on same machine, not in a container
Advertising localhost and the associated port will let you connect outside of the container, as you'd expect.
In other words, when running any Kafka Client outside the Docker network (including CLI tools you might have installed locally), use localhost:29092 for bootstrap servers and localhost:2181 for Zookeeper (requires Docker port forwarding)
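As a quick sanity check from the host (assuming the kafkacat utility is installed there), listing the cluster metadata should show the advertised localhost listener:
kafkacat -b localhost:29092 -L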
Client on another machine
If trying to connect from an external server, you'll need to advertise the external hostname/ip (e.g. 192.168.x.y) of the host as well as/in place of localhost.
Simply advertising localhost with a port forward will not work, because the Kafka protocol will still continue to advertise the listeners you've configured.
This setup requires Docker port forwarding and router port forwarding (and firewall / security group changes) if not in the same local network, for example, your container is running in the cloud and you want to interact with it from your local machine.
Client (or another broker) in a container, on the same host
This is the least error-prone configuration; you can use DNS service names directly.
When running an app in the Docker network, use kafka:9092 (see advertised PLAINTEXT listener config above) for bootstrap servers and zookeeper:2181 for Zookeeper, just like any other Docker service communication (doesn't require any port forwarding)
If you use separate docker run commands, or Compose files, you need to define a shared network manually
See the example Compose file for the full Confluent stack or more minimal one for a single broker.
If using multiple brokers, then they need to use unique hostnames + advertised listeners. See example
Related question
Connect to Kafka on host from Docker (ksqlDB)
Appendix
For anyone interested in Kubernetes deployments:
Accessing Kafka
Operators (recommended): https://operatorhub.io/?keyword=Kafka
Helm Artifact Hub: https://artifacthub.io/packages/search?ts_query_web=kafka&sort=stars&page=1
When you first connect to a Kafka node, it gives you back all the Kafka nodes and the URLs to connect to, and then your application tries to connect to every Kafka node directly.
The issue is always what Kafka gives you as the URL: that's why there is KAFKA_ADVERTISED_LISTENERS, which Kafka uses to tell the world how it can be accessed.
Now for your use case, there are multiple small things to think about:
Let's say you set plaintext://kafka:9092
This is OK if you have an application in your Docker Compose file that uses Kafka. This application will get from Kafka the kafka URL, which is resolvable through the Docker network.
If you try to connect from your main system, or from another container which is not in the same Docker network, this will fail, as the kafka name cannot be resolved.
==> To fix this, you need a specific DNS server, like a service-discovery one, but that is big trouble for small stuff. Or you manually map the kafka name to the container IP in each /etc/hosts file.
If you set plaintext://localhost:9092
This will be OK on your system if you have a port mapping (-p 9092:9092 when launching Kafka).
This will fail if you test from an application in a container (same Docker network or not), since localhost is the container itself, not the Kafka one.
==> If you have this and wish to use a Kafka client in another container, one way to fix it is to share the network for both containers (same IP).
Last option: set an IP in the name: plaintext://x.y.z.a:9092 (the Kafka advertised URL cannot be 0.0.0.0, as stated in the docs: https://kafka.apache.org/documentation/#brokerconfigs_advertised.listeners)
This will be OK for everybody... BUT how can you get the x.y.z.a address?
The only way is to hardcode this IP when you launch the container: docker run .... --net confluent --ip 10.x.y.z .... Note that you need to adapt the IP to one valid IP in the confluent subnet.
First, start Zookeeper:
docker container run --name zookeeper -p 2181:2181 zookeeper
Then start Kafka:
docker container run --name kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=192.168.8.128:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip_address_of_your_computer_but_not_localhost!!!:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka
in kafka consumer and producer config
@Bean
public ProducerFactory<String, String> producerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
return new DefaultKafkaProducerFactory<>(configProps);
}
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props);
}
I run my project with these settings. Good luck, dude.
The simplest way to solve this is to add a custom hostname to your broker using the -h option:
docker run -d \
--net=confluent \
--name=kafka \
-h broker-1 \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
and edit your /etc/hosts
127.0.0.1 broker-1
and use:
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
This allows me to access localhost:9092 in Kafka applications on my M1 Mac:
Key: KAFKA_ADVERTISED_LISTENERS
Value: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
plus port forwarding:
ports:
- "9092:9092"
Finally, again for my setup, I have to set the listeners key this way:
Key: KAFKA_LISTENERS
Value: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
Since this post has gotten a lot of attention over the years, I've listed the top solutions per platform at the bottom of this post.
Original post:
I want my node.js server to run in the background, i.e.: when I close my terminal I want my server to keep running. I've googled this and came up with this tutorial; however, it doesn't work as intended. So instead of using that daemon script, I thought I'd just use the output redirection (the 2>&1 >> file part), but this too does not exit - I get a blank line in my terminal, like it's waiting for output/errors.
I've also tried to put the process in the background, but as soon as I close my terminal the process is killed as well.
So how can I leave it running when I shut down my local computer?
Top solutions:
Systemd (Linux)
Launchd (Mac)
node-windows (Windows)
PM2 (Node.js)
Copying my own answer from How do I run a Node.js application as its own process?
2015 answer: nearly every Linux distro comes with systemd, which means forever, monit, PM2, etc are no longer necessary - your OS already handles these tasks.
Make a myapp.service file (replacing 'myapp' with your app's name, obviously):
[Unit]
Description=My app
[Service]
ExecStart=/var/www/myapp/app.js
Restart=always
User=nobody
# Note Debian/Ubuntu uses 'nogroup', RHEL/Fedora uses 'nobody'
Group=nogroup
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/var/www/myapp
[Install]
WantedBy=multi-user.target
Note if you're new to Unix: /var/www/myapp/app.js should have #!/usr/bin/env node on the very first line and have the executable mode turned on chmod +x app.js.
Copy your service file into /etc/systemd/system.
Start it with systemctl start myapp.
Enable it to run on boot with systemctl enable myapp.
See logs with journalctl -u myapp
This is taken from How we deploy node apps on Linux, 2018 edition, which also includes commands to generate an AWS/DigitalOcean/Azure CloudConfig to build Linux/node servers (including the .service file).
You can use Forever, a simple CLI tool for ensuring that a given node script runs continuously (i.e., forever):
https://www.npmjs.org/package/forever
UPDATE - As mentioned in one of the answers below, PM2 has some really nice functionality missing from forever. Consider using it.
Original Answer
Use nohup:
nohup node server.js &
EDIT: I wanted to add that the accepted answer is really the way to go. I'm using forever on instances that need to stay up. I like to do npm install -g forever so it's in the node path, and then just do forever start server.js.
This might not be the accepted way, but I do it with screen, especially while in development because I can bring it back up and fool with it if necessary.
screen
node myserver.js
Press Ctrl-A, then hit D.
The screen will detach and survive you logging off. Then you can get it back by doing screen -r. Hit up the screen manual for more details. You can name the screens and whatnot if you like.
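For example, a named session can be started already detached and re-attached later (the session name is arbitrary):
screen -dmS myserver node myserver.js   # start detached, named "myserver"
screen -r myserver                      # re-attach to it later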
2016 Update:
The node-windows/mac/linux series uses a common API across all operating systems, so it is absolutely a relevant solution. However, node-linux generates SysV init files. As systemd continues to grow in popularity, it is realistically a better option on Linux. PRs welcome if anyone wants to add systemd support to node-linux :-)
Original Thread:
This is a pretty old thread now, but node-windows provides another way to create background services on Windows. It is loosely based on the nssm concept of using an exe wrapper around your node script. However, it uses winsw.exe instead and provides a configurable node wrapper for more granular control over how the process starts/stops on failures. These processes are available like any other service:
The module also bakes in some event logging:
Daemonizing your script is accomplished through code. For example:
var Service = require('node-windows').Service;
// Create a new service object
var svc = new Service({
name:'Hello World',
description: 'The nodejs.org example web server.',
script: 'C:\\path\\to\\my\\node\\script.js'
});
// Listen for the "install" event, which indicates the
// process is available as a service.
svc.on('install',function(){
svc.start();
});
// Listen for the "start" event and let us know when the
// process has actually started working.
svc.on('start',function(){
console.log(svc.name+' started!\nVisit http://127.0.0.1:3000 to see it in action.');
});
// Install the script as a service.
svc.install();
The module supports things like capping restarts (so bad scripts don't hose your server) and growing time intervals between restarts.
Since node-windows services run like any other, it is possible to manage/monitor the service with whatever software you already use.
Finally, there are no make dependencies. In other words, a straightforward npm install -g node-windows will work. You don't need Visual Studio, .NET, or node-gyp magic to install this. Also, it's MIT and BSD licensed.
In full disclosure, I'm the author of this module. It was designed to relieve the exact pain the OP experienced, but with tighter integration into the functionality the Operating System already provides. I hope future viewers with this same question find it useful.
If you simply want to run the script uninterrupted until it completes, you can use nohup as already mentioned in the answers here. However, none of the answers provide a full command that also logs stdout and stderr.
nohup node index.js >> app.log 2>&1 &
The >> means append to app.log.
2>&1 makes sure that errors are also sent to stdout and added to app.log.
The ending & makes sure your current terminal is disconnected from the command so you can continue working.
If you want to run a node server (or something that should start back up when the server restarts) you should use systemd / systemctl.
UPDATE: I updated this to include the latest from PM2:
For many use cases, using a systemd service is the simplest and most appropriate way to manage a node process. For those running numerous node processes, or independently-running node microservices in a single environment, PM2 is a more full-featured tool.
https://github.com/unitech/pm2
http://pm2.io
It has a really useful monitoring feature -> a pretty 'GUI' for command-line monitoring of multiple processes with pm2 monit, or a process list with pm2 list
Organized log management -> pm2 logs
Other stuff (a quick-start sketch follows this list):
Behavior configuration
Source map support
PaaS Compatible
Watch & Reload
Module System
Max memory reload
Cluster Mode
Hot reload
Development workflow
Startup Scripts
Auto completion
Deployment workflow
Keymetrics monitoring
API
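A quick-start sketch (the app file and process name are placeholders):
npm install -g pm2                 # install globally
pm2 start app.js --name myapp      # daemonize the app
pm2 monit                          # live monitoring dashboard
pm2 logs myapp                     # tail the logs
pm2 startup                        # generate an init/systemd startup script
pm2 save                           # persist the current process list across reboots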
Try this command if you are using nohup:
nohup npm start 2>/dev/null 1>/dev/null&
You can also use forever to start the server:
forever start -c "npm start" ./
PM2 also supports npm start:
pm2 start npm -- start
If you are running OSX, then the easiest way to produce a true system process is to use launchd to launch it.
Build a plist like this, and put it into /Library/LaunchDaemons with the name top-level-domain.your-domain.application.plist (you need to be root when placing it):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>top-level-domain.your-domain.application</string>
<key>WorkingDirectory</key>
<string>/your/preferred/workingdirectory</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/node</string>
<string>your-script-file</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
</dict>
</plist>
When done, issue this (as root):
launchctl load /Library/LaunchDaemons/top-level-domain.your-domain.application.plist
launchctl start top-level-domain.your-domain.application
and you are running.
And you will still be running after a restart.
For other options in the plist look at the man page here: https://developer.apple.com/library/mac/documentation/Darwin/Reference/Manpages/man5/launchd.plist.5.html
I am simply using the daemon npm module:
var daemon = require('daemon');
daemon.daemonize({
stdout: './log.log'
, stderr: './log.error.log'
}
, './node.pid'
, function (err, pid) {
if (err) {
console.log('Error starting daemon: \n', err);
return process.exit(-1);
}
console.log('Daemonized successfully with pid: ' + pid);
// Your Application Code goes here
});
Lately I'm also using mon(1) from TJ Holowaychuk to start and manage simple node apps.
I use Supervisor for development. It just works. Whenever you make changes to a .js file, Supervisor automatically restarts your app with those changes loaded.
Here's a link to its Github page
Install :
sudo npm install supervisor -g
You can easily make it watch other extensions with -e. Another flag I use often is -i, to ignore certain folders.
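For example (a sketch; the extension and folder lists are comma-separated, and the file names are placeholders):
supervisor -e js,html -i node_modules,logs app.js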
You can use nohup and supervisor to make your node app run in the background even after you log out.
sudo nohup supervisor myapp.js &
Node.js as a background service in WINDOWS XP
Kudos to Hacksparrow at http://www.hacksparrow.com/install-node-js-and-npm-on-windows.html for the tutorial on installing Node.js + npm on Windows.
Kudos to Tatham Oddie at http://blog.tatham.oddie.com.au/2011/03/16/node-js-on-windows/ for the nssm.exe implementation.
Installation:
Install WGET http://gnuwin32.sourceforge.net/packages/wget.htm via installer executable
Install GIT http://code.google.com/p/msysgit/downloads/list via installer executable
Install NSSM http://nssm.cc/download/?page=download by copying nssm.exe into the %windir%\system32 folder
Create c:\node\helloworld.js
// http://howtonode.org/hello-node
var http = require('http');
var server = http.createServer(function (request, response) {
response.writeHead(200, {"Content-Type": "text/plain"});
response.end("Hello World\n");
});
server.listen(8000);
console.log("Server running at http://127.0.0.1:8000/");
Open command console and type the following (setx only if Resource Kit is installed)
C:\node> set path=%PATH%;%CD%
C:\node> setx path "%PATH%"
C:\node> set NODE_PATH="C:\Program Files\nodejs\node_modules"
C:\node> git config --system http.sslcainfo /bin/curl-ca-bundle.crt
C:\node> git clone --recursive git://github.com/isaacs/npm.git
C:\node> cd npm
C:\node\npm> node cli.js install npm -gf
C:\node> cd ..
C:\node> nssm.exe install node-helloworld "C:\Program Files\nodejs\node.exe" c:\node\helloworld.js
C:\node> net start node-helloworld
A nifty batch goodie is to create c:\node\ServiceMe.cmd
@echo off
nssm.exe install node-%~n1 "C:\Program Files\nodejs\node.exe" %~s1
net start node-%~n1
pause
Service Management:
The services themselves are now accessible via Start -> Run -> services.msc, or via Start -> Run -> MSCONFIG -> Services (and check 'Hide All Microsoft Services').
The script will prefix the name of every service made via the batch script with 'node-'.
Likewise they can be found in the registry: "HKLM\SYSTEM\CurrentControlSet\Services\node-xxxx"
The accepted answer is probably the best production answer, but for a quick hack doing dev work, I found this:
nodejs scriptname.js & didn't work, because nodejs seemed to gobble up the &, and so the thing didn't let me keep using the terminal without scriptname.js dying.
But I put nodejs scriptname.js in a .sh file, and
nohup sh startscriptname.sh & worked.
Definitely not a production thing, but it solves the "I need to keep using my terminal and don't want to start 5 different terminals" problem.
June 2017 Update:
Solution for Linux (Red Hat). The previous answers didn't work for me.
This works for me on Amazon Web Services, Red Hat 7. Hope this works for somebody out there.
A. Create the service file
sudo vi /etc/systemd/system/myapp.service
[Unit]
Description=Your app
After=network.target
[Service]
ExecStart=/home/ec2-user/meantodos/start.sh
WorkingDirectory=/home/ec2-user/meantodos/
[Install]
WantedBy=multi-user.target
B. Create a shell file:
/home/ec2-user/meantodos/start.sh
#!/bin/sh -
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to 8080
npm start
then:
chmod +rx /home/ec2-user/meantodos/start.sh
(to make this file executable)
C. Execute the Following
sudo systemctl daemon-reload
sudo systemctl start myapp
sudo systemctl status myapp
(If there are no errors, execute the command below so the app autoruns after the server restarts.)
sudo systemctl enable myapp
If you are running Node.js on a Linux server, I think this is the best way.
Create an Upstart service script and copy it to /etc/init/nodejs.conf
start service: sudo service nodejs start
stop service: sudo service nodejs stop
Service script:
description "DManager node.js server - Last Update: 2012-08-06"
author "Pedro Muniz - pedro.muniz#geeklab.com.br"
env USER="nodejs" #you have to create this user
env APPNAME="nodejs" #you can change the service name
env WORKDIR="/home/<project-home-dir>" #set your project home folder here
env COMMAND="/usr/bin/node <server name>" #app.js ?
# used to be: start on startup
# until we found some mounts weren't ready yet while booting:
start on started mountall
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
pre-start script
sudo -u $USER echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/$APPNAME.log
end script
script
# Not sure why $HOME is needed, but we found that it is:
export HOME="<project-home-dir>" #set your project home folder here
export NODE_PATH="<project node_path>"
#log file, grant permission to nodejs user
exec start-stop-daemon --start --make-pidfile --pidfile /var/run/$APPNAME.pid --chuid $USER --chdir $WORKDIR --exec $COMMAND >> /var/log/$APPNAME.log 2>&1
end script
post-start script
# Optionally put a script here that will notify you node has (re)started
# /root/bin/hoptoad.sh "node.js has started!"
end script
pre-stop script
sudo -u $USER echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/$APPNAME.log
end script
Use NSSM, the best solution for Windows. Just download nssm, open a cmd prompt in the nssm directory, and type
nssm install <service name> <node path> <app.js path>
e.g.: nssm install myservice "C:\Program Files\nodejs\node.exe" "C:\myapp\app.js"
This will install a new Windows service which will be listed in services.msc. From there you can start or stop the service; it will auto-start, and you can configure it to restart if it fails.
Use the pm2 module: a Node.js process manager.
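A minimal sketch of that approach, assuming your entry point is app.js (the service name is arbitrary):
npm install -g pm2
pm2 start app.js --name myapp
pm2 startup   # prints a command that registers pm2 with your init system
pm2 save      # remembers the current process list across reboots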
Since I'm missing this option in the list of provided answers, I'd like to add an eligible option as of 2020: docker or any equivalent container platform. In addition to ensuring your application runs in a stable environment, there are additional security benefits as well as improved portability.
There is docker support for Windows, macOS and most major Linux distributions. Installing docker on a supported platform is rather straightforward and well-documented. Setting up a Node.js application is as simple as putting it in a container and running that container while making sure it's being restarted after shutdown.
Create Container Image
Assuming your application is available in /home/me/my-app on that server, create a text file Dockerfile in folder /home/me with content similar to this one:
FROM node:lts-alpine
COPY /my-app/ /app/
RUN cd /app && npm ci
CMD ["/app/server.js"]
This creates an image that runs the LTS version of Node.js under Alpine Linux, copies the application's files into the image, and runs npm ci to make sure the dependencies match that runtime context.
Create another file .dockerignore in same folder with content
**/node_modules
This prevents your host system's existing dependencies from being injected into the container, as they might not work there; the RUN command in the Dockerfile reinstalls them for the container's environment.
Create the image using command like this:
docker build -t myapp-as-a-service /home/me
The -t option selects the "name" (tag) of the built container image; it is used when running containers below.
Note: the last parameter selects the folder containing the Dockerfile, not the Dockerfile itself. You may pick a different file with the -f option.
Start Container
Use this command for starting the container:
docker run -d --restart always -p 80:3000 myapp-as-a-service
This command assumes your app listens on port 3000 and that you want it exposed on port 80 of your host. The --restart always option tells the Docker daemon to bring the container back up if it crashes or after the host reboots.
This is a very limited example for sure, but it's a good starting point.
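To verify the container stays up and to watch your app's output, the usual Docker commands apply:
docker ps
docker logs -f <container-id-or-name>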
To round out the various options suggested, here is one more: the daemon command in GNU/Linux, which you can read about here: http://libslack.org/daemon/manpages/daemon.1.html. (apologies if this is already mentioned in one of the comments above).
Check out fugue! Apart from launching many workers, you can daemonize your node process too!
http://github.com/pgte/fugue
PM2 is a production process manager for Node.js applications with a built-in load balancer. It allows you to keep applications alive forever, to reload them without downtime and to facilitate common system admin tasks.
https://github.com/Unitech/pm2
I am surprised that nobody has mentioned Guvnor
I have tried forever, pm2, etc., but when it comes to solid control and web-based performance metrics, I have found Guvnor to be by far the best. Plus, it is also fully open source.
Edit: However, I am not sure if it works on Windows; I've only used it on Linux.
Has anyone noticed the trivial mistake in the position of "2>&1"?
2>&1 >> file
should be
>> file 2>&1
Redirections apply left to right: the first form points stderr at the current stdout (usually the terminal) before stdout is redirected to the file, so errors never make it into the log.
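For example, a corrected invocation in the spirit of the answers above (app.js and app.log are placeholders):
nohup node app.js >> app.log 2>&1 &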
I use tmux for a multiple window/pane development environment on remote hosts. It's really simple to detach and keep the process running in the background. Have a look at tmux
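A minimal sketch, assuming your entry point is myserver.js (the session name is arbitrary):
tmux new-session -d -s node-server 'node myserver.js'
tmux attach -t node-server   # reattach later to inspect it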
For people using newer versions of the daemon npm module - you need to pass file descriptors instead of strings:
var fs = require('fs');
// open the log files in append mode and hand their descriptors to daemon
var stdoutFd = fs.openSync('output.log', 'a');
var stderrFd = fs.openSync('errors.log', 'a');
require('daemon')({
  stdout: stdoutFd,
  stderr: stderrFd
});
If you are using pm2, you can use it with autorestart set to false:
$ pm2 ecosystem
This will generate a sample ecosystem.config.js:
module.exports = {
apps: [
{
script: './scripts/companies.js',
autorestart: false,
},
{
script: './scripts/domains.js',
autorestart: false,
},
{
script: './scripts/technologies.js',
autorestart: false,
},
],
}
$ pm2 start ecosystem.config.js
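Once started, you can check on the scripts:
pm2 ls     # process list and status
pm2 logs   # tail the combined output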
I received the following error when using @mikemaccana's accepted answer on a RHEL 8 AWS EC2 instance: (code=exited, status=216/GROUP)
It was due to the user/group being set to 'nobody'.
Upon googling, it seems that using 'nobody'/'nogroup' as the user/group is bad practice for daemons, as answered here on the Unix Stack Exchange.
It worked great after I set the user/group to my actual user and group.
You can enter whoami and groups to see your available options to fix this.
My service file for a full stack node app with mongodb:
[Unit]
Description=myapp
After=mongod.service
[Service]
ExecStart=/home/myusername/apps/myapp/root/build/server/index.js
Restart=always
RestartSec=30
User=myusername
Group=myusername
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/home/myusername/apps/myapp
[Install]
WantedBy=multi-user.target
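To activate the unit, assuming it was saved as /etc/systemd/system/myapp.service:
sudo systemctl daemon-reload
sudo systemctl enable --now myapp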
In case, for local development purposes, you need to start multiple instances of a NodeJS app (express, fastify, etc.), then the concurrently package might be an option. Here is a setup:
Prerequisites
Your NodeJS app (express, fastify, etc.) is placed at the /opt/mca/www/mca-backend/app path.
Assuming that you are using node v16 installed via brew install node@16
Setup
Install concurrently: npm install -g concurrently
Create a file ~/Library/LaunchAgents/mca.backend.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>mca.backend</string>
<key>LimitLoadToSessionType</key>
<array>
<string>Aqua</string>
<string>Background</string>
<string>LoginWindow</string>
<string>StandardIO</string>
<string>System</string>
</array>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/concurrently</string>
<string>--names</string>
<string>dev,prd</string>
<string>--success</string>
<string>all</string>
<string>--kill-others</string>
<string>--no-color</string>
<string>MCA_APP_STAGE=dev node ./server.mjs</string>
<string>MCA_APP_STAGE=prod node ./server.mjs</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>EnvironmentVariables</key>
<dict>
<key>PATH</key>
<string>/usr/local/opt/node@16/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin</string>
</dict>
<key>WorkingDirectory</key>
<string>/opt/mca/www/mca-backend/app</string>
<key>StandardErrorPath</key>
<string>/opt/mca/www/mca-backend/err.log</string>
<key>StandardOutPath</key>
<string>/opt/mca/www/mca-backend/out.log</string>
</dict>
</plist>
Load and run: launchctl bootstrap gui/`id -u` $HOME/Library/LaunchAgents/mca.backend.plist
Get a status: launchctl print gui/`id -u`/mca.backend
Stop: launchctl kill SIGTERM gui/`id -u`/mca.backend
Start/Restart: launchctl kickstart -k -p gui/`id -u`/mca.backend
Unload if not needed anymore: launchctl bootout gui/`id -u`/mca.backend
IMPORTANT: Once you have loaded the service with launchctl bootstrap, any changes you make to the file
~/Library/LaunchAgents/mca.backend.plist won't take effect until you unload the service (using launchctl bootout)
and then load it again (using launchctl bootstrap).
Troubleshooting
See launchd logs at: /private/var/log/com.apple.xpc.launchd/launchd.log
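The app's own output lands in the files set by StandardOutPath/StandardErrorPath above, so you can also watch those:
tail -f /opt/mca/www/mca-backend/out.log /opt/mca/www/mca-backend/err.log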
This answer is quite late to the party, but I found that the best solution was to write a shell script that used both the screen -dmS and nohup commands.
screen -dmS newScreenName sh -c 'nohup node myserver.js >> logfile.log'
(The sh -c wrapper makes the redirection happen inside the screen session, so it applies to node rather than to screen itself.) I add the >> logfile.log bit so I can easily save the node console.log() statements.
Why did I use a shell script? Well, I also added an if statement that checks whether the node myserver.js process is already running; see the sketch below.
That way I was able to create a single command that both keeps the server going and restarts it when I have made changes, which is very helpful for development.
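A rough sketch of such a script (the names are placeholders, not the author's original; it assumes pgrep/pkill are available):
#!/bin/sh
# restart-server.sh - stop any running instance, then relaunch in a detached screen
if pgrep -f 'node myserver.js' > /dev/null; then
    pkill -f 'node myserver.js'
fi
screen -dmS newScreenName sh -c 'nohup node myserver.js >> logfile.log 2>&1'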
I'm a student going into back-end development for the first time and am trying to learn Node.js. I downloaded a PDF book about Node.js from SitePoint called "Jumpstart Node.JS". In following the instructions to set up the server on the command line, install the dependencies, and navigate to localhost:3000, I got nothing except the following message: "Connection refused: localhost:3000". Can somebody please tell me what might have gone wrong and how to fix it?
Edit1:
The instructions I followed are about setting up a Node.js server using the Node command line, thus no code, simply cmd commands. However, here is a quick summary of the process I followed:
Created an account on MongoLabs and then a database using the free pricing plan.
Installed express using the command: npm install -g express@2.5.8.
Created an application with default options using this command: express authentication.
Modified the package.json file in system32.
Installed the dependencies by typing cd authentication, hitting enter, and then typing the command: npm install.
Typed node app and hit enter.
According to the instructions I should have seen the message "Welcome to express", but instead got the error message.
In following the instructions to set up the server on the command line, install the dependencies, and navigate to localhost:3000
It seems that you didn't start the server.
Somewhere between installing the dependencies and navigating to the URL you need to actually start the server if you want it to serve the request.
Check that there is no copy of the server running in the background, and that no other app is currently using the port.
(Your firewall should allow you to see which app has been allocated to that port.)
Node.js requires it to be the only app listening on that port on your computer.
Also, try a different port, maybe?
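A quick way to see what holds the port (3000 in this case):
lsof -i :3000                    # macOS/Linux
netstat -ano | findstr :3000     # Windows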
I'm using this code to make an AJAX request:
$("#userBarSignup").click(function(){
$.get("C:/xampp/htdocs/webname/resources/templates/signup.php",
{/*params*/},
function(response){
$("#signup").html("TEST");
$("#signup").html(response);
},
"html");
But from the Google Chrome JavaScript console I keep receiving this error:
XMLHttpRequest cannot load
file:///C:/xampp/htdocs/webname/resources/templates/signup.php. Cross
origin requests are only supported for HTTP.
The problem is that the signup.php file is hosted on my local web server that's where all the website is run from so it's not cross-domain.
How can I solve this problem?
I've had luck starting chrome with the following switch:
--allow-file-access-from-files
On OS X, try (re-type the dashes if you copy-paste):
open -a 'Google Chrome' --args --allow-file-access-from-files
On other *nix run (not tested)
google-chrome --allow-file-access-from-files
or on windows edit the properties of the chrome shortcut and add the switch, e.g.
C:\ ... \Application\chrome.exe --allow-file-access-from-files
to the end of the "target" path
If you're working on a little front-end project and want to test it locally, you'd typically open it by pointing your web browser at your local directory, for instance entering file:///home/erick/mysuperproject/index.html in your URL bar. However, if your site is trying to load resources, even if they're placed in your local directory, you might see warnings like this:
XMLHttpRequest cannot load file:///home/erick/mysuperproject/mylibrary.js. Cross origin requests are only supported for HTTP.
Chrome and other modern browsers have implemented security restrictions for Cross Origin Requests, which means that you cannot load anything through file:///; you need to use the http:// protocol at all times, even locally, due to Same Origin policies. Simple as that: you need to run a web server and serve your project from there.
This is not the end of the world and there are many solutions out there, including the good old Apache (with VirtualHosts if you're running several other projects), node.js with express, a Ruby server, etc., or simply modifying your browser settings.
However, there's a simpler and more lightweight solution for the lazy ones: Python's SimpleHTTPServer. It comes bundled with Python, so you don't need to install or configure anything at all!
So cd to your project directory, for instance
cd /home/erick/mysuperproject
and then simply use
python -m SimpleHTTPServer
And that’s it, you’ll see this message in your terminal
Serving HTTP on 0.0.0.0 port 8000 ...
So now you can go back to your browser and visit http://0.0.0.0:8000 with all your directory files served there. You can configure the port and other things, just see the documentation. But this simple trick works for me when I'm in a rush to test a new library or work out a new idea.
EDIT:
In Python 3+, SimpleHTTPServer has been replaced with http.server. So in Python 3.3, for example, the following command is equivalent:
python -m http.server 8000
You need to actually run a webserver, and make the get request to a URI on that server, rather than making the get request to a file; e.g. change the line:
$.get("C:/xampp/htdocs/webname/resources/templates/signup.php",
to read something like:
$.get("http://localhost/resources/templates/signup.php",
and the initial page request needs to be made over HTTP as well.
I was getting the same error while trying to load simple HTML files that used JSON data to populate the page, so I used node.js and express to solve the problem. If you do not have node installed, you need to install node first.
Install express
npm install express
Create a server.js file in the root folder of your project, in my case one folder above the files I wanted to serve
Put something like the following in the server.js file and read about this on the express GitHub site:
var express = require('express');
var app = express();
var path = require('path');
// __dirname will use the current path from where you run this file
app.use(express.static(__dirname));
app.use(express.static(path.join(__dirname, '/FOLDERTOHTMLFILESTOSERVER')));
app.listen(8000);
console.log('Listening on port 8000');
After you've saved server.js, you can run the server using:
node server.js
Go to http://localhost:8000/FILENAME and you should see the HTML file you were trying to load
If you have nodejs installed, you can download and install the server using command line:
npm install -g http-server
Change directories to the directory where you want to serve files from:
$ cd ~/projects/angular/current_project
Run the server:
$ http-server
which will produce the message Starting up http-server, serving on:
Available on:
http://your_ip:8080 and
http://127.0.0.1:8080
That allows you to use urls in your browser like
http://your_ip:8080/index.html
It works best this way. Make sure that both files are on the server. When calling the HTML page, use the web address, like http://localhost/myhtmlfile.html, and not C:/users/myhtmlfile.html. Make sure as well that the URL passed for the JSON is a web address, as denoted below:
$(function(){
$('#typeahead').typeahead({
source: function(query, process){
$.ajax({
url: 'http://localhost:2222/bootstrap/source.php',
type: 'POST',
data: 'query=' +query,
dataType: 'JSON',
async: true,
success: function(data){
process(data);
}
});
}
});
});
REM kill all existing instances of chrome
taskkill /F /IM chrome.exe /T
REM directory path where chrome.exe is located
set chromeLocation="C:\Program Files (x86)\Google\Chrome\Application"
cd /d %chromeLocation%
start chrome.exe --allow-file-access-from-files
Change the chromeLocation path to match yours.
Save the above as a .bat file.
Drag and drop your file onto the batch file you created. (Chrome does give a "restore pages"
option though, so if you had pages open just hit restore and it will work.)
You can also start a server without Python by using the PHP interpreter.
E.g.:
cd /your/path/to/website/root
php -S localhost:8000
This can be useful if you want an alternative to npm, as the php utility comes preinstalled on some OSes (including macOS).
For all python users:
Simply go to your destination folder in the terminal.
cd projectFolder
then start the HTTP server.
For Python3+:
python -m http.server 8000
Serving HTTP on :: port 8000 (http://[::]:8000/) ...
go to your link: http://0.0.0.0:8000/
Enjoy :)