Complete support for dynamic container names #231
Because a new container could have been redeployed during the companion's lifecycle.

Maybe the commits should be squashed to have a cleaner commit history and no doing-undoing commits? Otherwise nice PR, I hope it'll be merged. Even outside of Swarm and Docker Cloud it makes more sense this way than having a label on one container and env variables on the other.

Agreed, but that's done when merging. Afterwards, my fork will be deleted. 😺
README.md (outdated)

```diff
-2) Set the `NGINX_DOCKER_GEN_CONTAINER` environment variable to the name or id of the docker-gen container.
+2) Use the `com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen=true` label on the docker-gen container, or explicitly set the `NGINX_DOCKER_GEN_CONTAINER` environment variable to the name or id of that container.
```
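For illustration, a minimal (hypothetical, not a full working stack) compose fragment using the label on a separate docker-gen service might look like:

```yaml
# Sketch only: the companion discovers the docker-gen container by this
# label instead of relying on a fixed container name.
services:
  dockergen:
    image: jwilder/docker-gen
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen=true"
```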
There's no need for =true. Just a plain label is enough. It doesn't have to be a key-value pair.
Actually there is, because it defaults to an empty string and I kept the logic from #181, where it checks for the value "true": `jq -r '.[] | select( .Labels["'$1'"] == "true")|.Id'`
We would have to change this query; I'm not opposed to it.
Actually I think it's better to disregard the value, since it doesn't matter: `jq -r '.[] | select( .Labels["'$1'"])|.Id'` 👍
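To make the difference between the two jq filters concrete, here is a small Python emulation of both (illustrative only; the sample container JSON mimics the shape returned by the Engine's `/containers/json` endpoint):

```python
import json

# Sample of the JSON the Docker Engine returns from /containers/json
# (only the fields the jq filters actually use).
containers = json.loads("""[
  {"Id": "aaa", "Labels": {"com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen": ""}},
  {"Id": "bbb", "Labels": {"com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen": "true"}},
  {"Id": "ccc", "Labels": {}}
]""")

LABEL = "com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen"

# Strict filter (== "true"), as in #181: misses a bare label.
strict = [c["Id"] for c in containers if c["Labels"].get(LABEL) == "true"]

# Relaxed filter (key present, value ignored): matches bare labels too.
relaxed = [c["Id"] for c in containers if LABEL in c["Labels"]]

print(strict)   # ['bbb']
print(relaxed)  # ['aaa', 'bbb']
```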
Actually, it's incredibly inefficient to fetch the list of all containers. The Docker Engine API allows filtering by labels.
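As a rough sketch of what server-side filtering would look like: the Engine API's `/containers/json` endpoint accepts a JSON-encoded `filters` map, so the label match can happen in the daemon instead of client-side jq (illustrative Python; the label name is the one from this PR):

```python
import json
import urllib.parse

label = "com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen"

# A bare label key in the "filters" map matches any value.
filters = json.dumps({"label": [label]})
query = "containers/json?filters=" + urllib.parse.quote(filters)

print(query)
# The request itself would go over the Docker socket, e.g.:
#   curl --unix-socket /var/run/docker.sock "http://localhost/<query>"
```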
I think we can do that in another PR.
For reference, this is my modified version of the docker_api function: https://github.com/tripviss/tripviss-nginx-proxy/blob/0028116697aef28e9e7d5fec98fc791194e98e2b/docker/letsencrypt-companion/functions.sh#L31-L59
(really should get my changes merged upstream lol)
It's better for performance, and it also makes it easier to filter by either a bare label or a `label=value` pair.
I improved my docker-label-sighup, based on this.
It's cooler now, because it allows me to do this: `-notify "docker-label-sighup com.docker.swarm.service.name=frontend_nginx"`
So I can reuse the labels that already exist.
The mere existence of the label is enough.
Works great, thanks! When will this be merged?
Communication does not work if you stack deploy a container on another node within the swarm. Has this problem been resolved? The manager node is running the nginx-letsencrypt server, deployed with constraints pinning it to that manager node.

If I create two docker-gen services in the same deployment, one for the manager and one for the worker, container start detection is acknowledged on the worker node, but the process returns the error shown below on the worker node. So basically, running everything on the same node works fine, but if the proxy and the application container are on different nodes within the Swarm, it doesn't work.

Here is the frontend yml:

```yaml
version: '3.2'
services:
  nginx: # manager node - only one instance
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    # volumes are pointed to nfs, all nodes have access to these files.
    volumes:
      - ./nginx-conf/etc/conf.d:/etc/nginx/conf.d
      - ./nginx-conf/etc/vhost.d:/etc/nginx/vhost.d
      - ./nginx-conf/usr/share/nginx/html:/usr/share/nginx/html
      - ./nginx-conf/var/letsencrypt/certs:/etc/nginx/certs:ro
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
    deploy:
      placement:
        constraints: [node.id == rxyh541dpy2wnucsvif1cvcn6]
  dockergen: # manager node
    image: helder/docker-gen
    command: -notify "docker-label-sighup com.docker.swarm.service.name=frontend_nginx" -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes:
      - ./nginx-conf/etc/conf.d:/etc/nginx/conf.d
      - ./nginx-conf/etc/vhost.d:/etc/nginx/vhost.d
      - ./nginx-conf/var/docker-gen/templates/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl
      - ./nginx-conf/var/letsencrypt/certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    deploy:
      placement:
        constraints: [node.id == rxyh541dpy2wnucsvif1cvcn6]
  dockergen-rm1: # worker node
    image: helder/docker-gen
    command: -notify "docker-label-sighup com.docker.swarm.service.name=frontend_nginx" -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes:
      - ./nginx-conf/etc/conf.d:/etc/nginx/conf.d
      - ./nginx-conf/etc/vhost.d:/etc/nginx/vhost.d
      - ./nginx-conf/var/docker-gen/templates/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl
      - ./nginx-conf/var/letsencrypt/certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen
    deploy:
      placement:
        constraints: [node.id == rbom24xvsp57b0dwe3064xekm]
  letsencrypt: # manager node
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      #- /var/run/docker.sock:/var/run/docker.sock:ro
      - ./nginx-conf/etc/conf.d:/etc/nginx/conf.d
      - ./nginx-conf/etc/vhost.d:/etc/nginx/vhost.d
      - ./nginx-conf/var/letsencrypt/certs:/etc/nginx/certs
      - ./nginx-conf/usr/share/nginx/htmlt:/usr/share/nginx/html
    deploy:
      placement:
        constraints: [node.id == rxyh541dpy2wnucsvif1cvcn6]
  letsencrypt-rm1: # worker node
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./nginx-conf/etc/conf.d:/etc/nginx/conf.d
      - ./nginx-conf/etc/vhost.d:/etc/nginx/vhost.d
      - ./nginx-conf/var/letsencrypt/certs:/etc/nginx/certs
      - ./nginx-conf/usr/share/nginx/htmlt:/usr/share/nginx/html
    deploy:
      placement:
        constraints: [node.id == rbom24xvsp57b0dwe3064xekm]
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "dockergen"
      NGINX_PROXY_CONTAINER: "nginx"
networks:
  default:
    external:
      name: proxy
```

Here is the application yml:

```yaml
version: '3'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret_root_pass
      - MYSQL_PASSWORD=secret_user_pass
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    deploy:
      placement:
        constraints: [node.id == rbom24xvsp57b0dwe3064xekm] # worker node
  app:
    image: nextcloud
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - VIRTUAL_HOST=www.doamin.com
      - LETSENCRYPT_HOST=www.domain.com
      - LETSENCRYPT_EMAIL=name@email.com
    restart: always
    deploy:
      placement:
        constraints: [node.id == rbom24xvsp57b0dwe3064xekm] # worker node
networks:
  default:
    external:
      name: proxy
```
This is my attempt to solve the same #205 issue.

In Swarm Mode and Docker Cloud, container names are dynamic. #181 solved this partially with a label on `jwilder/nginx-proxy`, but not if they're separate containers. #126 tries a different approach, but I think using a label is a better one.

This PR simply adds a `com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen=true` label to the docker-gen container. It also moves the label-related detection to runtime (`functions.sh`: `nginx-reload`) instead of startup time (`entrypoint.sh`), because the nginx and docker-gen containers could be redeployed during the lifecycle of the letsencrypt companion and thus get different ids.

There's also a new assumption, in that I think it's simpler to just recommend using the label instead of `volumes-from` to detect the nginx-proxy container. The reason is that in some environments (like mine), you can't use `volumes-from`, but I think you can always use labels.

Note: If you're wondering why I've chosen not to set the `NGINX_DOCKER_GEN_CONTAINER` and `NGINX_PROXY_CONTAINER` variables, that's because we can use them to make sure an explicitly set container name takes precedence over a label.

Now I'm running a successful separate-containers deployment in Swarm Mode; here's my stack:

As you can see, I'm also using a custom `helder/docker-gen` image which simply adds a `docker-label-sighup` script that allows me to reload the nginx container via a label, because... well, dynamic names. `jwilder/docker-gen` doesn't support reloading containers with dynamic names (yet).