etcd cluster startup flakiness with TLS enabled #2163
@matte21 Can you provide any details about what certs etcd-operator generated in this case? Do you know what DNS names were in the Subject Alternative Names?
The etcd cluster members used static TLS:
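(For reference, a minimal sketch of what that static TLS section of an EtcdCluster spec looks like; the secret names below are hypothetical, not the ones from my deployment script:)

```yaml
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3
  TLS:
    static:
      member:
        peerSecret: etcd-peer-tls      # hypothetical name; holds the peer cert/key/CA
        serverSecret: etcd-server-tls  # hypothetical name; holds the server cert/key/CA
      operatorSecret: etcd-client-tls  # hypothetical name; client cert used by the operator
```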
The certificates and the secrets carrying them were created by a deployment script. The cnf files for the three certs follow. Peer:
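(The original cnf is not reproduced here; below is a minimal sketch of a peer-cert cnf, assuming a hypothetical cluster named example-etcd-cluster in namespace default and the wildcard SANs that static TLS needs for peer-to-peer traffic:)

```ini
# peer.cnf -- sketch; cluster and namespace names are hypothetical
[ req ]
prompt             = no
distinguished_name = dn
req_extensions     = v3_req

[ dn ]
CN = etcd-peer

[ v3_req ]
keyUsage         = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName   = @alt_names

[ alt_names ]
DNS.1 = *.example-etcd-cluster.default.svc
DNS.2 = *.example-etcd-cluster.default.svc.cluster.local
```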
Server:
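(Again a sketch with the same hypothetical names; the exact SAN list depends on the deployment, but the server cert also has to cover the client service names:)

```ini
# server.cnf -- sketch; names are hypothetical
[ req ]
prompt             = no
distinguished_name = dn
req_extensions     = v3_req

[ dn ]
CN = etcd-server

[ v3_req ]
keyUsage         = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName   = @alt_names

[ alt_names ]
DNS.1 = *.example-etcd-cluster.default.svc
DNS.2 = *.example-etcd-cluster.default.svc.cluster.local
DNS.3 = example-etcd-cluster-client.default.svc
DNS.4 = example-etcd-cluster-client.default.svc.cluster.local
DNS.5 = localhost
```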
Client:
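(Client-cert sketch; a client cert only needs clientAuth and no SANs:)

```ini
# etcd-client.cnf -- sketch
[ req ]
prompt             = no
distinguished_name = dn
req_extensions     = v3_req

[ dn ]
CN = etcd-client

[ v3_req ]
keyUsage         = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
```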
In addition: after the second member of the etcd cluster got stuck in a failed state, the etcd-operator logs contained:
At @MikeSpreitzer's prompt, here's a list of issues where the observed behavior was similar to the one described in this issue:
Is there a solution for this issue?
@myazid not that I know of. Are you experiencing it? If so, could you post more details?
I saw this error in the logs:
etcd did a reverse lookup by IP, and when I tested that lookup myself, the resulting domain was not in the DNSNames that the server TLS cert defines.
etcd-operator: 0.9.4, running on K8s v1.17.0
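(To check this yourself: reverse-resolve the member pod's IP the way etcd does, then dump the SANs of the server cert and compare. The IP, secret name, and key name below are hypothetical; adjust to your deployment:)

```sh
# Reverse-resolve the member pod's IP, as etcd does (IP is hypothetical):
nslookup 10.244.1.5

# Dump the SANs of the server cert from its secret (names are hypothetical):
kubectl -n default get secret etcd-server-tls -o jsonpath='{.data.server\.crt}' \
  | base64 -d \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'
```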
Observation: I was experiencing the same problem. However, when I configured PVCs, the etcd pod would take significant time (~50s) to start after it had been spawned, and in that case the DNS name mismatch did not occur; presumably the extra startup delay gave the pod's DNS record time to propagate.
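(For anyone wanting to reproduce this: if I read the 0.9.x API right, PVCs are configured via the pod policy's persistentVolumeClaimSpec; a sketch, with a hypothetical storage class:)

```yaml
spec:
  pod:
    persistentVolumeClaimSpec:
      storageClassName: standard   # hypothetical
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```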
Reported by @matte21 on kubernetes/kubernetes#81508 (comment):
cc @MikeSpreitzer @hexfusion