The order in which CRD versions are processed is not deterministic #1068

Closed
jvanz opened this issue Sep 27, 2024 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@jvanz

jvanz commented Sep 27, 2024

I'm trying to add the shortNames configuration to some CRDs. These CRDs have two versions, v1 and v1alpha2. I changed the marker to // +kubebuilder:resource:scope=Cluster,shortName=cap, i.e. added the shortName field, on the v1 types only. However, each time I run controller-gen the resulting CRD definitions differ: on some runs the shortNames configuration is added to a given CRD, on others it is not.

After downloading the controller-tools repository, adding some debug messages, and building controller-gen locally, I found the issue: every time controller-gen runs, the order in which it processes the CRD versions changes. When the v1alpha2 CRD is the last one to be processed, the generated CRD definition is missing the shortNames configuration. To work around this, I have to define the shortName field in the kubebuilder marker for both versions.
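
For illustration, here is a minimal sketch of the layout described above (package, kind, and file names are hypothetical, not taken from the reporter's project). The resource marker carries shortName only on the v1 type, so whether the generated CRD ends up with shortNames depends on which version controller-gen happens to process last:

```go
// api/v1/captest_types.go -- hypothetical v1 type; shortName is only set here.
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster,shortName=cap
type CapTest struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}
```

```go
// api/v1alpha2/captest_types.go -- same kind, older version, no shortName,
// so the generated CRD loses shortNames whenever this marker is applied last.
package v1alpha2

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster
type CapTest struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}
```

The workaround mentioned above is to repeat shortName=cap in the marker on both versions, which makes the output independent of processing order.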

Is this behavior expected? Shouldn't controller-gen consider this marker only for the latest version?

@JoelSpeed
Contributor

Is this behavior expected? Shouldn't controller-gen consider this marker only for the latest version?

The latest is hard to define, since technically the version string can be anything and doesn't have to follow the v1alpha1, v1beta1, v1 pattern we typically see in Kube.

However, perhaps pinning this to the storage version would make sense, since that always has to be present and gives a singular answer to the version without having to write some complex logic internally to guess which is the latest
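
For context, the storage version is already identified by its own marker, so there is exactly one unambiguous choice per CRD. A minimal sketch of what pinning resource-level settings to that version could look like on the input side (the type name is hypothetical):

```go
// The v1 type is marked as the storage version; the suggestion above is that
// controller-gen could take CRD-wide settings such as shortName from the
// version carrying this marker, instead of from whichever version it happens
// to process last.
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +kubebuilder:object:root=true
// +kubebuilder:storageversion
// +kubebuilder:resource:scope=Cluster,shortName=cap
type CapTest struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}
```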

@sbueringer
Member

sbueringer commented Oct 1, 2024

I wonder if we should simply return an error if the // +kubebuilder:resource marker is defined multiple times for the same API type, and otherwise only use the one occurrence of the marker that we can find.
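
A rough sketch of the kind of check being proposed (this is not controller-tools code, just an illustration with made-up types): collect which versions of a kind carry the resource marker and error out when there is more than one.

```go
package main

import "fmt"

// markerOwner records that a given API version of a kind carries the
// +kubebuilder:resource marker (hypothetical representation, not the
// controller-tools data model).
type markerOwner struct {
	Kind    string
	Version string
}

// checkResourceMarker returns an error when the resource marker is defined
// on more than one version of the same kind, mirroring the suggestion above.
func checkResourceMarker(owners []markerOwner) error {
	byKind := map[string][]string{}
	for _, o := range owners {
		byKind[o.Kind] = append(byKind[o.Kind], o.Version)
	}
	for kind, versions := range byKind {
		if len(versions) > 1 {
			return fmt.Errorf("+kubebuilder:resource is defined on multiple versions of %s: %v", kind, versions)
		}
	}
	return nil
}

func main() {
	err := checkResourceMarker([]markerOwner{
		{Kind: "CapTest", Version: "v1"},
		{Kind: "CapTest", Version: "v1alpha2"},
	})
	fmt.Println(err)
}
```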

@jvanz
Author

jvanz commented Oct 10, 2024

However, perhaps pinning this to the storage version would make sense, since that always has to be present and gives a singular answer to the version without having to write some complex logic internally to guess which is the latest

I like the idea of using the storage version for that.

I wonder if we should simply return an error if the // +kubebuilder:resource marker is defined multiple times for the same API type, and otherwise only use the one occurrence of the marker that we can find.

If we do that, won't we miss the v1alpha2 definition in the final CRD spec? I mean, if I want to keep both versions in the CRD definition, which afaics is the current behavior, won't this change prevent users from doing that?

@sbueringer
Member

sbueringer commented Oct 19, 2024

If we do that, won't we miss the v1alpha2 definition in the final CRD spec?

No, if you define the resource marker on only one of your Go types, all versions are still included in the CRD.

The entire resource marker (type Resource struct { ... }) consists of fields that only exist once on the CRD resource.

So I don't think it makes sense to allow it to be configured multiple times and then pick one of the markers to use for the CRD. Just like the storage version, it should only be defined once.
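
This matches the shape of the CRD API itself: in k8s.io/apiextensions-apiserver, naming fields such as shortNames live once on the CRD spec, while only the schema and the served/storage flags are per version. A minimal sketch (group and kind are made up):

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	spec := apiextensionsv1.CustomResourceDefinitionSpec{
		Group: "example.io", // hypothetical group
		Scope: apiextensionsv1.ClusterScoped,
		// Names (plural, kind, shortNames, ...) exists exactly once per CRD,
		// which is why a per-version resource marker has no place to go.
		Names: apiextensionsv1.CustomResourceDefinitionNames{
			Plural:     "captests",
			Kind:       "CapTest",
			ShortNames: []string{"cap"},
		},
		// Per-version data is limited to the schema and the served/storage
		// flags; shortNames cannot differ between v1 and v1alpha2.
		Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
			{Name: "v1", Served: true, Storage: true},
			{Name: "v1alpha2", Served: true, Storage: false},
		},
	}
	fmt.Println(spec.Names.ShortNames) // [cap]
}
```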

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 17, 2025.
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 16, 2025.
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage robot's /close not-planned comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Mar 18, 2025.