v0.13.5 trying to create already existing record #3754
We encountered the same issue described by the OP after upgrading the Helm chart to version 1.13.0 (external-dns v0.13.5) from 1.12.2 (external-dns v0.13.4): pods entered CrashLoopBackOff after repeated failures to create records that already existed in the target Route53 hosted zone. Our current working solution is to downgrade back to Helm chart v1.12.2 (v0.13.4).
Same issue with Google DNS; downgrading to Helm chart v1.12.2 (v0.13.4) works.
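For anyone else pinning back, a minimal sketch of the downgrade (the release name, namespace, and repo alias `external-dns` are assumptions; substitute your own):

```shell
# Assumed release name, namespace, and repo alias -- adjust to your install.
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update
# Pin the chart back to v1.12.2, which ships external-dns v0.13.4,
# keeping the values from the existing release.
helm upgrade external-dns external-dns/external-dns \
  --namespace external-dns \
  --reuse-values \
  --version 1.12.2
```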
We are having the same issue: the pod straight up crashes from a normal error we have seen all the time, through multiple version upgrades over the years.
This seems to be caused by this change: #3009
What do you mean by
Any update on this?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Bump
/remove-lifecycle stale
To ensure that the maintainers understand this is still an issue: it is. We did upgrade past v1.12.x, but it required a significant number of record deletions in order to allow CoreDNS to recreate the records it had previously managed without issue. Now everything is working fine. If there is a conflict in DNS we would expect the error message, but not the entire CoreDNS deployment to enter CrashLoopBackOff.
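For reference, the record cleanup described above can be sketched with the AWS CLI; every value here (hosted zone ID, record name, TTL, ownership value) is a placeholder, and Route53 requires a DELETE to match the existing record exactly:

```shell
# Hypothetical example: delete a conflicting record so the controller can
# recreate it on the next sync. All identifiers below are placeholders --
# look up the real values with list-resource-record-sets first.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000000 \
  --change-batch '{
    "Changes": [{
      "Action": "DELETE",
      "ResourceRecordSet": {
        "Name": "app.example.com.",
        "Type": "TXT",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "\"heritage=external-dns,external-dns/owner=my-cluster\""}
        ]
      }
    }]
  }'
```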
Bump.
I'm guessing this is fixed, at least for some providers, by #4166?
/remove-lifecycle stale
What happened:
After updating to v0.13.5, external-dns enters CrashLoopBackOff while trying to create a DNS record in Route53 that already exists and that it already manages.
What you expected to happen:
Not crashing.
How to reproduce it (as minimally and precisely as possible):
Probably:
Anything else we need to know?:
We have the following DNS records for an EKS LoadBalancer, created by external-dns v0.13.4:
After updating to v0.13.5, external-dns tries to recreate them and fails:
Command line args (for both versions):
Environment:
External-DNS version (use `external-dns --version`): v0.13.5