fix(cloudflare): handle 81058 identical record error for region migration#6090
AndrewCharlesHay wants to merge 15 commits into kubernetes-sigs:master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Pull Request Test Coverage Report for Build 20928128947

Warning: this coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch, so it includes changes from outside the original pull request, potentially including unrelated coverage changes.

💛 - Coveralls
Force-pushed from `6de7f7d` to `78a65aa`
```go
func (p *CloudFlareProvider) resolveIdenticalRecordConflict(ctx context.Context, zoneID string, record dns.RecordResponse) error {
	// To resolve the conflict, we need to find the existing record (likely Global) and delete it.
```
To me, there is an issue here.
ExternalDNS should never decide to delete a record created externally (by the user or another app).
This is not what most users will expect, and it may lead to severe production or security issues.
@mloiseleur Thank you for the feedback.
I have updated the PR to put this behavior behind an explicit opt-in flag: --cloudflare-region-key-conflict-resolution.
- Default behavior: Log the conflict error (81058) and fail gracefully (no deletion), advising the user to enable the flag if desired.
- With Flag: Perform the lookup-and-delete resolution strategy to unblock the region-key record creation.
This ensures operators consciously opt-in to the potential destructiveness of replacing global records with regional ones.
This PR introduces a "lost ownership" problem. External-dns has a mechanism to identify ownership. If a record is not owned by external-dns, or there is a problem with the record, external-dns is not responsible for fixing it.
In that case, most likely, the right behavior is to log the problem, throw a soft error, and move on.
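The skip-unowned-records rule described above can be sketched as follows. This is an illustration only, assuming a simple name-to-owner map; external-dns's real ownership mechanism is its registry (e.g. TXT records carrying an owner ID), and the function names here are hypothetical.

```go
package main

import "fmt"

// ownedBy models the ownership check: a record is managed only if the
// registry attributes it to this instance's owner ID (names illustrative).
func ownedBy(owners map[string]string, recordName, ourOwner string) bool {
	return owners[recordName] == ourOwner
}

// planDelete decides whether external-dns should touch a record: unowned
// records are logged and skipped (a soft error) rather than deleted.
func planDelete(owners map[string]string, recordName, ourOwner string) bool {
	if !ownedBy(owners, recordName, ourOwner) {
		fmt.Printf("skipping %s: not owned by %q\n", recordName, ourOwner)
		return false
	}
	return true
}

func main() {
	owners := map[string]string{"app.example.com": "external-dns"}
	fmt.Println(planDelete(owners, "app.example.com", "external-dns"))    // owned: true
	fmt.Println(planDelete(owners, "legacy.example.com", "external-dns")) // unowned: false
}
```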
I found this comment: #5459 (comment). It could be relevant; it describes overall external-dns behaviour in similar cases.
It's probably worth adding this to the docs somewhere, so it's super clear.
If we think it is worth challenging the status quo, it's most likely worth starting a discussion in Slack or a short proposal: https://github.com/kubernetes-sigs/external-dns/tree/master/docs/proposal
I think we have something misconfigured on our system. External DNS is updating the region in place now, so I'm going to close this PR.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What does it do?
Solves #6091
Motivation
More