docs(proposal): annotations support graduation to beta #5080

Closed
ivankatliarchuk wants to merge 2 commits into kubernetes-sigs:master from gofogo:docs-proposal-annotations-graduate-to-beta

Conversation

@ivankatliarchuk
Member

@ivankatliarchuk ivankatliarchuk commented Feb 9, 2025

Description

No issues to date, but there is a clear ask and appetite to improve annotation processing. It may take a while, but it is worth having a plan in place so the community can help improve annotation processing.

Currently 10 open pull requests: https://github.com/kubernetes-sigs/external-dns/pulls?q=is%3Apr+is%3Aopen+annotation

And 40+ open issues: https://github.com/kubernetes-sigs/external-dns/issues?q=is%3Aissue%20state%3Aopen%20annotations

Main goals

  • It is worth having (or at least initiating) a discussion
  • A standard interface, so we could automate generating docs from code
  • Make it easy to add/deprecate/graduate an annotation or group of annotations

The proposal is to merge to master on 2025-Mar-09 with the decision.

Checklist

  • Unit tests updated
  • End user documentation updated

Signed-off-by: ivan katliarchuk <ivan.katliarchuk@gmail.com>
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 9, 2025
Signed-off-by: ivan katliarchuk <ivan.katliarchuk@gmail.com>
@ivankatliarchuk
Member Author

/label tide/merge-method-squash

@k8s-ci-robot k8s-ci-robot added the tide/merge-method-squash Denotes a PR that should be squashed by tide when it merges. label Feb 9, 2025
Contributor

@szuecs szuecs left a comment

Thanks for the effort! I think this can enhance the safety and usability of external-dns.

I think there are some things clearly missing:

  1. How do we make sure that annotation graduation comes with an objective increase in quality?
  2. How do we handle this with providers that are out of tree, which is one of our top priorities?

As long as we have no good answers to these questions, we should not invest time in an implementation.


- **Deprecation Policy and Migration Path**: Establish a clear deprecation policy for outdated annotations. Implement mechanisms to log warnings when deprecated annotations are used and provide comprehensive migration guides to assist users in transitioning to supported annotations.

- **Conflict Detection and Resolution**: Enhance the annotation processing logic to detect conflicting annotations proactively. Implement validation rules that either prevent conflicts at the time of deployment or resolve them in a predictable manner, ensuring consistent behavior.
Contributor


Do you suggest implementing a validation webhook, or something else?

Member Author


Not exactly. A validation webhook could be a future step. At the moment an annotation in external-dns is just a string, but it could be a struct with simple validation logic. If validation fails, we do not process the annotation. Something basic.
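
As a rough illustration of that idea, here is a minimal Go sketch of an annotation modeled as a struct with attached validation. The `Annotation` type, the `process` helper, and the TTL handling are hypothetical, not the actual external-dns implementation:

```go
package main

import (
	"fmt"
	"strconv"
)

// Annotation is a hypothetical struct replacing "annotation is just a
// string" handling: a well-known key plus a validation function.
type Annotation struct {
	Key      string
	Validate func(value string) error
}

// TTLAnnotation sketches basic validation for a TTL-like annotation:
// the value must parse as a non-negative integer.
var TTLAnnotation = Annotation{
	Key: "external-dns.alpha.kubernetes.io/ttl",
	Validate: func(value string) error {
		n, err := strconv.Atoi(value)
		if err != nil {
			return fmt.Errorf("ttl: %q is not an integer", value)
		}
		if n < 0 {
			return fmt.Errorf("ttl: must be non-negative, got %d", n)
		}
		return nil
	},
}

// process skips (rather than fails on) annotations that do not validate;
// the caller would log a warning and ignore the annotation.
func process(a Annotation, value string) (bool, error) {
	if err := a.Validate(value); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	ok, _ := process(TTLAnnotation, "300")
	fmt.Println(ok) // true
	ok, err := process(TTLAnnotation, "fast")
	fmt.Println(ok, err != nil) // false true
}
```

The point is only that a struct carries a place to hang validation (and later documentation), without yet introducing a webhook.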


- No automated documentation for available annotations.
- Unclear strategy for supporting different API versions.
- No defined transition path from `external-dns.alpha.kubernetes.io` annotations to a stable format.
Contributor


Agree, maybe we should define alpha/beta/stable groups.

Member Author


We could follow the same path as for CRDs; at the moment the aim is just to standardise annotation processing. I will have a look around; most likely these groups are already provided by Kubernetes itself.

- No defined transition path from `external-dns.alpha.kubernetes.io` annotations to a stable format.
- Lack of standardization among annotations.

By adopting a structured approach similar to `ingress-nginx`, we can address these issues and improve the overall functionality and user experience of `external-dns`.
Contributor


Can you link to or tell us more about their approach? I did not find anything browsing their GH repo.

Contributor


Member Author


Exactly. All annotations are well known; each annotation is grouped (this could be part of the implementation) and carries a validator and documentation, as well as the providers or sources it supports. So we could generate documentation directly from the code.

Member Author


I'll add this to the documentation.


- Introduce automated documentation for supported annotations.
- Define a strategy for handling multiple API versions in annotations.
- Ensure backward compatibility where possible.
Contributor


Backward compatibility is the most important IMO, so I would drop the "where possible".
I think a lifecycle change needs a minor release version increase and a deprecation note.
I would suggest something like: alpha -> alpha+beta -> beta -> beta+stable -> stable, where each "->" is one minor version increase. A controller update would then have both working in the "alpha+beta" step, but log a warning for "alpha" annotations during that graduation step. Once all annotations read by an external-dns instance in a cluster are migrated to "beta", there is no warning log and the next controller update to "beta" is fine.

Member Author


I'll add this to the decision


### Non-Goals

- Establish a migration plan from `external-dns.alpha.kubernetes.io` to `external-dns.beta.kubernetes.io`.
Contributor


IMO we should have a clear idea how to execute the change, so I would like to have it also as goal.

Member Author


I'll rename the proposal. Let's leave migration for the future. At the moment the code is tightly coupled with annotations in sources, so a clean migration would mean a lot of code changes plus support for different versions, which may double the size of the codebase.

- _As a maintainer_, I want to define and communicate a clear lifecycle for annotation versions, so that contributors and users understand when alpha annotations will be deprecated and how to migrate.
- _As a maintainer_, I want to ensure that annotation behavior is consistent across supported DNS providers (e.g., AWS Route 53, Cloudflare, Google DNS), so that users do not experience unexpected inconsistencies depending on their provider.
- _As a maintainer_, I want to establish validation rules that reject conflicting or redundant annotations at runtime, so that users do not face unpredictable behavior due to overlapping DNS rules.
- _As a maintainer_, I want to collaborate with other Kubernetes SIGs (e.g., SIG-Network, SIG-Auth) to align annotation standards,
Contributor


  1. We are a project of sig-network so "other Kubernetes SIGs" is not sig-network.
  2. Where do you see collaboration needs? Is there any trial to standardize annotations?


### Behavior

- `external-dns` should recognize both `alpha` and `beta` annotations where applicable.
Contributor


What does it exactly mean?


- `external-dns` should recognize both `alpha` and `beta` annotations where applicable.
- Warnings should be logged when deprecated annotations are used.
- Warnings should be logged when annotation is not supported by source or provider.
Contributor


If the provider is out of tree we don't know their deployed version nor if an annotation is supported or exists.
How do you make sure it works?

- `external-dns` should recognize both `alpha` and `beta` annotations where applicable.
- Warnings should be logged when deprecated annotations are used.
- Warnings should be logged when annotation is not supported by source or provider.
- Future major versions should drop support for `alpha` annotations after a defined period.
Contributor


I would suggest using minor versions instead of major versions. In Go, major versions mean breaking changes for exposed functions/types/..., and if we have 50 annotations and graduate them in small groups we will soon be at v50, which IMO makes no sense.

### Alternative 2: Keep Annotations in Alpha Permanently

- Pros: No migration burden for users.
- Cons: Lack of stability signals to users, discouraging adoption.
Contributor


I don't see how we can really make sure that beta is better than alpha. What is the signal of quality used to make this change?

@ivankatliarchuk
Member Author

Not abandoned. Still working things out.

@ivankatliarchuk ivankatliarchuk mentioned this pull request Mar 17, 2025
@ivankatliarchuk ivankatliarchuk marked this pull request as draft April 24, 2025 07:05
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 24, 2025
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 23, 2025
@ivankatliarchuk
Member Author

Still working on it.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 23, 2025
@ivankatliarchuk
Member Author

Still WIP

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

Details

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ivankatliarchuk
Member Author

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Oct 11, 2025
@k8s-ci-robot
Contributor

@ivankatliarchuk: Reopened this PR.

Details

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign mloiseleur for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coveralls

Pull Request Test Coverage Report for Build 18428296455

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 78.581%

Totals Coverage Status
Change from base Build 18416862229: 0.0%
Covered Lines: 15845
Relevant Lines: 20164

💛 - Coveralls

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

Details

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels

cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. docs lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. tide/merge-method-squash Denotes a PR that should be squashed by tide when it merges.
