Introduction
In a previous blog post, I described how to run Nginx as a reverse proxy with automatically renewed Let’s Encrypt SSL certificates, eliminating the need for the shell scripts and the cron job. That solution relies entirely on several Docker images originally developed by Jason Wilder: it uses the nginx-proxy Docker container in combination with the acme-companion Docker container to obtain the Let’s Encrypt certificates and configure an Nginx server operating in reverse proxy mode.
The Problem
An essential component of the nginx-proxy and acme-companion solution is the ability of these containers to monitor which other Docker containers are running, which they do through access to the Docker socket on the host machine. If you pay attention to the mounted volumes, you will see that the host’s /var/run/docker.sock is mounted into both containers, giving them visibility into every other Docker container running on the same host, and access to a resource that belongs to the host machine itself.
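For illustration, in the original setup the socket mount looks roughly like this (a sketch based on the upstream nginx-proxy examples; the acme-companion container mounts the same socket):

# the host's Docker socket is handed to the container via the -v mount
$ docker run -d -p 80:80 -p 443:443 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    nginxproxy/nginx-proxy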
What could possibly go wrong?
Well, the Docker socket (/var/run/docker.sock) is essentially the Unix socket that the Docker daemon listens on for API requests. By mounting this socket inside a container, you allow that container to communicate directly with the Docker daemon on the host system. With access to this socket, a process inside the container can execute Docker commands as if it were the Docker client running on the host. This means the container can:
- Start, stop, and manage other containers.
- Pull images from registries.
- Build new images.
- Inspect containers and potentially extract sensitive information (environment variables, secrets, etc.).
- Mount host directories into new containers.
- Execute commands on the host itself by starting privileged containers.
- Mount any part of the host’s filesystem (e.g., /, /etc, /var, etc.) into a new container, giving it read and write access to sensitive areas of the host system. Basically, it can modify or delete critical system files, install or remove software, or even exfiltrate sensitive data.
So basically, we are giving a container the ability to gain root access on the host machine. A malicious container could use the socket to start a new privileged container with root access on the host… isn’t that the worst-case scenario?
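To make the risk concrete, here is a rough sketch of what such an escalation could look like, assuming a Docker client is available inside the compromised container (the commands are illustrative):

# run from inside a container that has the host's Docker socket mounted
$ export DOCKER_HOST=unix:///tmp/docker.sock   # or wherever the socket was mounted
# start a privileged container that mounts the host's root filesystem...
$ docker run --rm -it --privileged -v /:/host alpine chroot /host /bin/sh
# ...and we now have a root shell on the host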
Could there be any worse scenario?
Well yes…
A container running an audited, non-malicious, internet-facing image, such as Nginx or nginx-proxy, could still be hacked through some vulnerability, giving the attacker a foothold in a container that has access to the host’s /var/run/docker.sock. From there, the attacker can start a privileged container from an image already available on the system, gaining root access on the host machine.
A common misconception is that mounting the socket in “read-only” mode solves the problem. The read-only flag only prevents the container from deleting or modifying the socket file itself; Docker API calls to the Docker daemon on the host machine are not affected by the read-only status of the mounted socket. That is the biggest problem: the container can still make the same Docker API calls and start another container with privileged access to the host.
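For example, the following still works from inside a container that has the socket mounted with :ro (a sketch; it assumes curl is available in the container):

# list the running containers on the host via the Docker Engine API,
# even though the socket is mounted read-only
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json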
The OWASP recommendations for Docker security mention this issue as Rule #1 in their Docker Security Cheat Sheet.
The slightly more secure solution
I say slightly because, in the configuration discussed below, we are still giving two containers access to the Docker socket. However, the containers with that access do not expose ports to the internet and are therefore less susceptible to the scenario where an attacker gets a foothold on the internet-facing Nginx container and uses it to compromise the host machine.
This configuration was suggested by the maintainers of the nginx-proxy repository on GitHub, and I am only sharing it to make sure that, if you decide to use a solution based on nginx-proxy, you use the slightly more secure version, until someone comes up with an even better solution.
Here is the docker-compose.yml file I came up with, placed in the folder ~/nginx-proxy/:
---
services:
  nginx:
    image: nginx:alpine
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - vhostd:/etc/nginx/vhost.d
      - conf:/etc/nginx/conf.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
    networks:
      - proxy

  nginx-proxy-gen:
    image: nginxproxy/docker-gen
    container_name: nginx-proxy-gen
    restart: always
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes_from:
      - nginx
    volumes:
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - "com.github.nginx-proxy.docker-gen"
    networks:
      - proxy

  letsencrypt-nginx-proxy-companion:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    restart: always
    environment:
      DEFAULT_EMAIL: ${LETSENCRYPT_EMAIL}
      NGINX_DOCKER_GEN_CONTAINER: nginx-proxy-gen
    volumes_from:
      - nginx
    volumes:
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - proxy

volumes:
  vhostd:
  certs:
  conf:
  html:
  acme:

networks:
  proxy:
You will need to download the nginx.tmpl template from https://raw.githubusercontent.com/nginx-proxy/nginx-proxy/main/nginx.tmpl and place it next to the docker-compose.yml file, since it is mounted as ./nginx.tmpl. You will also need to define LETSENCRYPT_EMAIL, e.g. in a ~/nginx-proxy/.env file, because the acme-companion service references it.
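Something along these lines should do it (a sketch; adjust the paths and the email address to your setup):

$ cd ~/nginx-proxy/
# fetch the template used by docker-gen to render the Nginx configuration
$ curl -o nginx.tmpl https://raw.githubusercontent.com/nginx-proxy/nginx-proxy/main/nginx.tmpl
$ echo "LETSENCRYPT_EMAIL=it@thedomain" > .env
$ docker compose up -d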
Now the web-app backend, let’s say matomo, can go into a different docker-compose.yml file in the folder ~/matomo/:
---
services:
  matomo:
    image: matomo
    container_name: matomo
    restart: always
    environment:
      VIRTUAL_HOST: ${THE_DOMAIN_NAME}
      VIRTUAL_PORT: ${THE_PORT_MATOMO_IS_LISTENING_FOR}
      LETSENCRYPT_HOST: ${THE_DOMAIN_NAME}
      LETSENCRYPT_EMAIL: ${THE_EMAIL_USED_FOR_LETS_ENCRYPT}
    depends_on:
      - db
    volumes:
      - matomo:/var/www/html
    networks:
      - proxy
      - matomo

  db:
    image: mariadb
    container_name: db
    command: --max-allowed-packet=64MB
    restart: always
    environment:
      - MARIADB_DATABASE=matomo
      - MARIADB_USER
      - MARIADB_PASSWORD
      - MARIADB_ROOT_PASSWORD
    volumes:
      - db:/var/lib/mysql
    networks:
      - matomo

volumes:
  # named volumes referenced by the services above
  matomo:
  db:

networks:
  matomo:
  proxy:
    external: true
    name: nginx-proxy_proxy
Note that the name of the external network, nginx-proxy_proxy, is derived from the Compose project name (by default the folder name, nginx-proxy) plus the network name proxy. And of course the variables need to be defined in the ~/matomo/.env file, using the same names referenced in docker-compose.yml:
$ cat ~/matomo/.env
MARIADB_USER=matomo
MARIADB_PASSWORD=some_good_password
MARIADB_ROOT_PASSWORD=some_other_good_password
THE_DOMAIN_NAME=thedomain
# 80 is the port the official matomo (Apache) image listens on
THE_PORT_MATOMO_IS_LISTENING_FOR=80
THE_EMAIL_USED_FOR_LETS_ENCRYPT=it@thedomain
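With both files in place, the backend stack can be started the usual way (a sketch; once the containers are up, the acme-companion container should request the certificate for the configured domain):

$ cd ~/matomo/
$ docker compose up -d
# watch the acme-companion logs to confirm the certificate was issued
$ docker logs -f nginx-proxy-acme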
Conclusion
The solution presented in the previous blog post exposes to the internet a container that has access, via the mounted socket /var/run/docker.sock, to the Docker daemon running as root on the host machine. This gives hackers the opportunity to compromise the host machine and gain root access if they manage to break into that container.
The solution presented above reduces that risk by exposing only a plain Alpine-based Nginx container, while the containers that have access to the Docker daemon running on the host machine are not exposed. If an attacker managed to compromise the Nginx container, they would still have to find a way to break out to the host machine.
This reduces that specific risk, but it does not address the fact that two containers still have full access to the host machine; if either of those containers could be leveraged, an attacker or a malicious party could take full control over the host machine.
Therefore, this solution, and this blog post, will serve as a reminder about what could go wrong when using nginx-proxy or similar methods. It also constitutes a first step toward a more secure configuration. Stay tuned.