Description
What are you trying to do?
I am sharing the tailscale socket into a caddy container via a docker volume. If I restart the tailscale process on the host (e.g. on a tailscale update), the /var/run/tailscale directory is removed and my docker bind mount is invalidated until I restart the caddy container. Example docker compose snippet:
version: "3.8"
services:
  caddy:
    container_name: caddy
    restart: always
    image: jumager/caddy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/run/tailscale:/var/run/tailscale
      - /run/containers:/run/containers
      - ./data:/data
      - ./config:/config
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
      - 127.0.0.1:2019:2019
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
      - CADDY_DOCKER_CADDYFILE_PATH=/config/Caddyfile
      - CADDY_HOST=test.example.org
      - CADDY_TAILNET_HOST=test.tailXXXXX.ts.net
      - CADDY_REDIS_HOST=redis.tailXXXXX.ts.net
How should we solve this?
If tailscaled.service set the option:
RuntimeDirectoryPreserve=yes
the /var/run/tailscale directory would persist across daemon restarts, and once the daemon creates a new socket there, the docker container could reconnect to it without having to be restarted just to remount the volume.
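As a local workaround in the meantime, a drop-in override along these lines should have the same effect (a sketch only; the path is the default location that systemctl edit tailscaled writes to):

# /etc/systemd/system/tailscaled.service.d/override.conf
[Service]
RuntimeDirectoryPreserve=yes

followed by systemctl daemon-reload and systemctl restart tailscaled.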
What is the impact of not solving this?
I have to restart my caddy container, which disrupts all services, even though I only need the tailnet in the caddy configuration for administrative services that are meant to be used only by trusted tailnet users. I am using the tailscaled socket to acquire TLS certificates for services accessible via the tailnet, for example the watchtower API. A more detailed write-up of how I configure my systems can be found here:
https://github.com/jum/caddy-docker-proxy-redis
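For illustration only (the hostname and upstream below are placeholders, not my actual configuration, which lives in the repository above): with the socket mounted, a plain Caddyfile site block for a ts.net hostname is enough for Caddy to fetch its certificate from tailscaled.

test.tailXXXXX.ts.net {
    # placeholder upstream; Caddy obtains the certificate for this ts.net
    # name from tailscaled via /var/run/tailscale/tailscaled.sock
    reverse_proxy watchtower:8080
}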
Anything else?
Thanks for the great software!