Reloading consul certs #17297
Hi @juananinca 👋 I believe CA's behaviour is the expected one in this case. This is the code for the Nomad agent configuration reload (lines 1286 to 1355 in da9ec8c):
As mentioned in the docs, only Nomad's own TLS configuration is reloaded. Are the CA and CB configurations identical, both for Nomad and Consul? I could imagine this happening if one of the agents is configured to ignore TLS certs.
Sorry for the delay. Yes, both configs are the same.
And the Nomad config; in this case the only difference is the
As you can see, the
I'm also experiencing a similar error in 1.8.1. It doesn't happen very often, since cert rotation is infrequent, but it manifests on some allocations and clients a few days after certificates are renewed (using consul-template for this).
It then evolves into some allocations failing (and not restarting) with errors in the Connect sidecar, like:
It recovers after Consul and Nomad are both restarted on the affected client. I haven't been able to figure out why...
Nomad version
Nomad v1.5.0
BuildDate 2023-03-01T10:11:42Z
Revision fc40c49
Operating system and Environment details
NAME="Oracle Linux Server"
VERSION="8.7"
Issue
I set up a Nomad cluster with Consul consisting of a few clients and a single server. Both Nomad and Consul are secured with mutual TLS; the certificates are generated by Vault's PKI secrets engine and rotated with a TTL of 1h using consul-template on each node, just like in this tutorial: https://developer.hashicorp.com/nomad/tutorials/integrate-vault/vault-pki-nomad (the Vault service is not running within this cluster). After every rotation, consul-template sends a SIGHUP to both Nomad and Consul via systemctl reload.
While I was testing the cluster, I found that one of the clients (let's call it CA) was unable to register any service in Consul, although if I ran the same job on another client (let's call it CB) it registered without problems. I also noticed in CA's Nomad log that it was unable to get Consul's checks:
{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-23T00:52:25.791875+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}
After restarting Nomad on CA, all jobs run on that client are registered in Consul, and the bad-certificate error when fetching Consul checks is gone until the next expiration; then I am back at the starting point, unable to register new services from CA and with bad-certificate errors in the Nomad logs. What is weird is that CB keeps registering and communicating with Consul even after the certs are rotated, without any restart of the Nomad service.
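For context, the rotation wiring described above (consul-template rendering a cert from Vault's PKI engine and reloading the agent when it changes) typically looks something like the stanza below. The template source, destination path, and service name are illustrative, not taken from the report:

```hcl
# Illustrative consul-template stanza: render the Nomad agent cert and
# reload Nomad whenever the rendered file changes. Paths are hypothetical;
# a matching stanza would exist for the key, the CA bundle, and for Consul.
template {
  source      = "/etc/consul-template.d/templates/nomad-agent.crt.tpl"
  destination = "/etc/nomad.d/tls/agent.crt"
  command     = "systemctl reload nomad"
}
```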
I headed to the documentation (https://developer.hashicorp.com/nomad/docs/configuration#configuration-reload) and it made sense to me (kind of).
tls: note this only reloads the TLS configuration between Nomad agents (servers and clients), and not the TLS configuration for communication with Consul or Vault.
This perfectly explains CA's behaviour regarding communication with Consul, but not CB's.
Who's right and who's wrong? Is CA acting as it is supposed to? And what about CB?
Note: I double-checked the certs' expiration from CB by copying them at the moment I restart Nomad (the copies are therefore not rotated), while they are still valid, and waiting until they expire. Once they are expired I use them to curl a Consul endpoint, for instance https://localhost:8500/v1/agent/checks, and I get a bad-certificate error. Yet Nomad is still using its certs without any error and without restarting the service, just sending a SIGHUP via systemctl reload nomad.
Reproduction steps
Expected Result
Not sure
Actual Result
Clients behaving differently under the same conditions
Job file (if appropriate)
Nomad Server logs (if appropriate)
Nomad Client logs (if appropriate)
{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:10.809608+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}
{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:12.837512+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}
{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:14.866487+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}
{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:18.923713+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}
{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:20.953657+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}
{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:22.983925+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}
{"@level":"error","@message":"failed to retrieve check statuses","@module":"watch.checks","@timestamp":"2023-05-22T12:37:25.010972+02:00","error":"Get \"https://127.0.0.1:8500/v1/agent/checks\": remote error: tls: bad certificate"}