Add v1beta1 API #532
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alculquicondor. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 2db8d86 to ff72832
/cc
type ResourceGroup struct {
    // resources is the list of resources covered by the flavors in this group.
    Resources []corev1.ResourceName `json:"resources"`
The field has the same name as the one in FlavorQuota but a different type. Maybe one of them should have a different name? It looks a bit weird in yaml.
I think that's ok. They are in different contexts.
+1 with @mwielgus
Changed
Any changes compared to v1alpha1 other than clusterqueue_types.go and the queue-name label?
    // If the ClusterQueue belongs to a cohort, the sum of the quotas for each
    // (flavor, resource) combination defines the maximum quantity that can be
    // allocated by a ClusterQueue in the cohort.
    Quota resource.Quantity `json:"quota"`
Following up on @liggitt's suggestion, we can use AssuredLimit for the absolute value, and in the future we can add AssuredLimitShares or AssuredLimitPercentage (depending on the semantics we want to enable) when we build hierarchical quota.
The source for this suggestion was API priority and fairness.
In alpha they used AssuredConcurrencyShares
https://github.com/kubernetes/kubernetes/blob/f97d14c6c88e92cb505f8a9147294705a93247e3/staging/src/k8s.io/api/flowcontrol/v1alpha1/types.go#L437
But in beta they are using NominalConcurrencyShares
https://github.com/kubernetes/kubernetes/blob/f97d14c6c88e92cb505f8a9147294705a93247e3/staging/src/k8s.io/api/flowcontrol/v1beta3/types.go#L471
Definition of nominal:
stated or expressed but not necessarily corresponding exactly to the real value.
It matches our intent, don't you think?
    // Total is the total quantity of the resource used, including resources
    // borrowed from the cohort.
    Total resource.Quantity `json:"total,omitempty"`
There seems to be a bit of inconsistency. In the spec we have base + borrowed. Here it is total + borrowed.
That is a good point, but I think it's rather useful:
If an administrator considers increasing the quota, it's useful for them to directly see the total usage.
And observing the borrowed quota is useful for adjusting borrowingLimit.
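To make the distinction concrete, here is a hypothetical status snippet reporting both values; the flavorsUsage field name and the quantities are assumptions for illustration only:

```yaml
status:
  flavorsUsage:          # assumed field name for per-flavor usage in ClusterQueueStatus
  - name: default-flavor
    resources:
    - name: cpu
      total: "12"        # everything in use by admitted workloads, including borrowed quota
      borrowed: "2"      # the portion of total taken from the cohort's unused quota
```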
    // name of the resource
    Name corev1.ResourceName `json:"name"`

    // Total is the total quantity of the resource used, including resources
What about resources borrowed BY the cohort?
We cannot quantify those. A ClusterQueue doesn't directly borrow from another ClusterQueue. They borrow from the pool of unused resources in the Cohort.
This is so that a single Workload can use borrowed quota from multiple CQs at the same time.
Well, we kind of do. Imagine the situation when all capacity in the cohort is taken and the queue itself has no admitted workloads. Then borrowedByCohort = base queue quota. If all queues have low usage, within their quota, then borrowedByCohort = 0. Right :)? So we can generalize it to:
max(0, SumOfUsageInOtherQueuesInCohort - SumOfQuotaInOtherQueuesInCohort)
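As a concrete instance of that formula: if the other queues in the cohort have a combined quota of 10 CPUs but are currently using 14, then borrowedByCohort = max(0, 14 - 10) = 4 CPUs.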
Good point, but it can be added later. Not sure how much it's worth. We don't maintain those numbers in the cache. We just calculate them when we snapshot the cache to run the admission cycle.
type ResourceFlavor struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`
Why no Spec and Status?
So far we haven't found a case for Status, so we skipped it for simplicity. Can you think of something?
This is similar to how PriorityClasses are defined https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
I think that we will have a problem IF we find a use case for status (like a resource-depleted notification or so). Without Spec and Status we would have to do another API version, with deprecation, migration, etc., just to expose that bit of info. An empty Status doesn't cost us much at this moment, but may be very handy in the future.
I guess we don't need to add a status field, but we can add the spec. See update.
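For illustration, a hypothetical ResourceFlavor with a dedicated spec; the nodeLabels field and its values are assumptions, not something settled in this thread:

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: spot
spec:
  nodeLabels:            # assumed spec field associating the flavor with nodes
    instance-type: spot  # example label; any node label could be used
```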
    // resourceGroups describes groups of resources.
    // Each resource group defines the list of resources and a list of flavors
    // that provide quotas for these resources.
    // resourceGroups can be up to 16.
    // +listType=atomic
    // +kubebuilder:validation:MaxItems=16
Can we mention that it does not guarantee the actual availability of resources?
Moreover, can we add examples, as in v1alpha2?
I rephrased the definition of nominalQuota below a little.
The word nominal sounds good. Thanks for the updates!
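For reference, a hypothetical example of the resourceGroups layout discussed above, using field names that appear elsewhere in this PR (coveredResources, flavors, nominalQuota, borrowingLimit); the flavor name and quantities are made up:

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: team-a-cq
spec:
  resourceGroups:
  - coveredResources: ["cpu", "memory"]
    flavors:
    - name: default-flavor
      resources:
      - name: cpu
        nominalQuota: 10      # quota this ClusterQueue contributes to the cohort
        borrowingLimit: 5     # extra it may take from the cohort's unused quota
      - name: memory
        nominalQuota: 64Gi
```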
//+kubebuilder:printcolumn:name="Strategy",JSONPath=".spec.queueingStrategy",type=string,description="The queueing strategy used to prioritize workloads",priority=1
//+kubebuilder:printcolumn:name="Pending Workloads",JSONPath=".status.pendingWorkloads",type=integer,description="Number of pending workloads"
//+kubebuilder:printcolumn:name="Admitted Workloads",JSONPath=".status.admittedWorkloads",type=integer,description="Number of admitted workloads that haven't finished yet",priority=1
//+kubebuilder:storageversion
Do you plan to add the conversion webhooks?
Most likely not. But that's part of the justification of going to beta: that from now on we would have a conversion webhook if there are breaking changes from v1beta1 to v1beta2 or v1.
But if you know of users running in prod, it would be good to have their feedback.
Most likely not. But that's part of the justification of going to beta: that from now on we would have a conversion webhook if there are breaking changes from v1beta1 to v1beta2 or v1.
It sounds good to me.
But if you know of users running in prod, it would be good to have their feedback.
I haven't heard of any users running it in production, ever.
Force-pushed from b1751f2 to a1669ff
/retest
apis/kueue/v1beta1/constants.go
Outdated
package v1beta1

const (
    ResourceInUseFinalizerName = "kueue.k8s.io/resource-in-use"
Is this domain correct? Probably kueue.x-k8s.io/resource-in-use?
Ah, this is my misunderstanding. Since the API will be graduated to beta, kueue.k8s.io/resource-in-use is correct.
We'll stay in x-k8s. We have consulted with @liggitt in any case, to make sure we are following best practices in our first beta release.
Sounds good. Thanks for letting me know.
"kueue.k8s.io/resource-in-use"
or"kueue.x-k8s.io/resource-in-use"
?
I swear I had changed it before :| It should be kueue.x-k8s.io. Fixed.
// ResourceFlavor is the Schema for the resourceflavors API.
type ResourceFlavor struct {
    metav1.TypeMeta `json:",inline"`
Can we have a switch to turn off a resource flavor in all of the queues (possibly as a follow-up PR)?
That's beyond the scope of this PR and it deserves its own KEP with user stories.
Also, how do reviewers feel about merging this as is and having a separate PR for swapping the controllers over to the new API?
I'm ok.
    // If the ClusterQueue belongs to a cohort, the sum of the quotas for each
    // (flavor, resource) combination defines the maximum quantity that can be
    // allocated by a ClusterQueue in the cohort.
    NominalQuota resource.Quantity `json:"quota"`
-    NominalQuota resource.Quantity `json:"quota"`
+    NominalQuota resource.Quantity `json:"nominalQuota"`
    // cohort. Only quota from [resource, flavor] pairs listed in the CQ can be
    // borrowed.
-    // cohort. Only quota from [resource, flavor] pairs listed in the CQ can be
-    // borrowed.
+    // cohort. Only borrowingLimit from [resources, flavors] pairs listed in the CQ can be
+    // borrowed.
The word quota was intended. The documentation for how much quota is allowed to be borrowed is left for the description of the borrowingLimit field itself.
It sounds good to me.
✅ Deploy Preview for kubernetes-sigs-kueue canceled.
Force-pushed from 33cc687 to ab31e4f
  shortNames:
  - cq
Is the removal of shortName intended?
yeah.... too cryptic
It sounds good to me.
In this case, we may need to remove the kubebuilder markers from the v1alpha1 API.
@@ -13,7 +13,8 @@ spec:
    listKind: ResourceFlavorList
    plural: resourceflavors
    shortNames:
    - rf
Is the removal of shortName intended?
Same, too cryptic. flavor is better.
Good.
Generally, LGTM.
type ResourceGroup struct {
    // resources is the list of resources covered by the flavors in this group.
    Resources []corev1.ResourceName `json:"resources"`
+1 with @mwielgus
    // There could be up to 16 resources.
    // +listType=map
    // +listMapKey=name
    // +kubebuilder:validation:MaxItems=16
What about MinItems here?
Yes, good catch, added.
    // combination that this ClusterQueue is allowed to borrow from the unused
    // quota of other ClusterQueues in the same cohort.
-    // combination that this ClusterQueue is allowed to borrow from the unused
-    // quota of other ClusterQueues in the same cohort.
+    // combination that this ClusterQueue is allowed to borrow from the unused
+    // quota of other ClusterQueues in the same cohort, but not guaranteed.
Not sure... I already mentioned this is unused quota. Or are you referring to something else? But please check the update for your previous comment.
    ClusterQueueActive string = "Active"
)

type Usage struct {
Where is this used? I didn't find any usages.
I forgot to delete it :)
apis/kueue/v1beta1/constants.go
Outdated
package v1beta1

const (
    ResourceInUseFinalizerName = "kueue.k8s.io/resource-in-use"
"kueue.k8s.io/resource-in-use"
or"kueue.x-k8s.io/resource-in-use"
?
    // template is the Pod template.
    //
    // The only allowed fields in template.metadata are labels and annotations.
Do you mean we'll validate this in Kueue? Or will we ignore other fields directly?
Yes, we'll validate in Kueue.
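To illustrate what that validation would allow, a hypothetical Workload podSet; the podSets/count structure and the container spec are assumptions for this sketch:

```yaml
spec:
  podSets:
  - name: main
    count: 1
    template:
      metadata:
        labels:              # allowed
          app: sample
        annotations:         # allowed
          note: example
        # any other metadata field here would be rejected by Kueue's validation
      spec:
        restartPolicy: Never
        containers:
        - name: main
          image: registry.example.com/sample:latest
```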
    // allocated by a ClusterQueue in the cohort.
    NominalQuota resource.Quantity `json:"nominalQuota"`

    // borrowingLimit is the maximum amount of quota for the [flavor, resource]
Maybe describe that MaximumQuota = NominalQuota + BorrowingLimit; that is more intuitive.
See update.
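For illustration, assuming that relation: a flavor with nominalQuota: 10 and borrowingLimit: 5 for CPU could use at most 15 CPUs, with the extra 5 available only while the cohort has unused quota.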
@@ -19,10 +19,13 @@ package constants
 import "time"

 const (
-    // QueueAnnotation is the annotation in the workload that holds the queue name.
+    // QueueLabel is the label key in the workload that holds the queue name.
-    // QueueLabel is the label key in the workload that holds the queue name.
+    // QueueLabel is the label key in the job that holds the queue name.
Uhm... not sure. Other Job APIs should use the same label.
/retest
It looks like the CI didn't like having different shortNames across versions.
    //
    // +kubebuilder:default=Never
    // +kubebuilder:validation:Enum=Never;LowerPriority;Any
    ReclaimWithinCohort PreemptionPolicy `json:"withinCohort,omitempty"`
-    ReclaimWithinCohort PreemptionPolicy `json:"withinCohort,omitempty"`
+    WithinCohort PreemptionPolicy `json:"withinCohort,omitempty"`
In order to align the struct name and the json field name.
Oops, this was a mistake in #514. It should be reclaimWithinCohort
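For context, a hypothetical preemption stanza with the corrected field name; the withinClusterQueue sibling field and the chosen values are assumptions based on the enum above:

```yaml
spec:
  preemption:
    reclaimWithinCohort: Any            # reclaim quota that other ClusterQueues borrowed from this one
    withinClusterQueue: LowerPriority   # assumed sibling field for preemption inside this ClusterQueue
```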
    //
    // The type of the condition could be:
    //
    // - Admitted: the Workload was admitted through a ClusterQueue.
Extend the list with PodsReady for consistency.
Added.
    // Gatekeeper should be used to enforce more advanced policies.
    // Defaults to null which is a nothing selector (no namespaces eligible).
    // If set to an empty selector `{}`, then all namespaces are eligible.
    NamespaceSelector *metav1.LabelSelector `json:"namespaceSelector,omitempty"`
I wonder if we can disambiguate this by naming it allowedNamespacesSelector?
I don't anticipate another kind of namespaceSelector, so I don't see a strong benefit. There is also precedent in NetworkPolicy https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource
The confusion I heard once was that this field selects the Jobs submitted to the queue. It would be better if the name conveys that this is about policy.
@liggitt any thoughts on naming here?
how does a namespace submit workloads to the cluster queue? what happens if a workload in a namespace not matched by this selector tries to use this cluster queue?
A namespace submits workloads to a LocalQueue. A LocalQueue references a ClusterQueue.
If the selector doesn't match, the Workload gets a Condition indicating that the Workload is not admitted because it doesn't match the namespaceSelector.
This is a basic form of policy enforcement. Ideally the cluster admin sets up something like Gatekeeper, but in most cases Gatekeeper is overkill.
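As a sketch of the behavior described here (the label key is made up), only Workloads coming from namespaces matching the selector can be admitted by this ClusterQueue:

```yaml
spec:
  namespaceSelector:       # null would make no namespace eligible; {} would make all eligible
    matchLabels:
      team: research       # hypothetical namespace label
```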
    // The list cannot be empty and it can contain up to 16 resources.
    // +kubebuilder:validation:MinItems=1
    // +kubebuilder:validation:MaxItems=16
    CoveredResources []corev1.ResourceName `json:"coveredResources"`
I read the design doc https://docs.google.com/document/d/1Uu4hfGxux4Wh_laqZMLxXdEVdty06Sb2DwB035hj700/edit?resourcekey=0-b7mU7mGPCkEfhjyYDsXOBg#heading=h.dkobprho4lxe and I understand the intention of adding a slice of resource names as a visual aid; I just want to make sure we all agree on this. cc @ahg-g
In addition to being a visual aid, it makes the API more explicit, which is a good principle.
But I agree that it might add some burden, especially for resource groups with a single resource. On the other hand, ClusterQueues should be APIs that don't change often.
    Resources []ResourceUsage `json:"resources"`
}

type ResourceUsage struct {
Oh, that means borrowed, not borrowing. my brain....
Thanks @alculquicondor for the great work, LGTMed.
Thank you all!
Sounds good. I'm looking forward to kueue v1beta1 :)
Force-pushed from 2d464a4 to c1f7f44
@@ -19,10 +19,13 @@ package constants
 import "time"

 const (
-    // QueueAnnotation is the annotation in the workload that holds the queue name.
+    // QueueLabel is the label key in the workload that holds the queue name.
+    QueueLabel = "kueue.x-k8s.io/queue-name"
Revisit this label. I'm reading the API design notes about MatchFields
https://github.com/kubernetes/kubernetes/blob/fc8b5668f506b6d71ddc5243a21f5d0c5f8271d3/pkg/apis/core/types.go#L2558-L2563
That field was designed specifically for DaemonSet pod scheduling. One reason we avoided reusing MatchExpressions there is that we can't model non-label usages as labels (take DaemonSet for example: we're finding one exact node, not a set of nodes). In this case, it should be a unique LocalQueue name, not a set of queue names. An annotation seemed acceptable as an experimental feature, especially considering we haven't populated queueName to the Job yet.
Are you saying that we should stay with an annotation instead of a label?
I don't understand what you mean by a unique localQueue name. Multiple workloads can point to the same queue.
I mean a workload can only belong to one localQueue; we're putting a name field into labels, which seems like we're abusing labels.
This label is for the (custom) Job objects, not the Workloads. We call them workloads in general. Maybe we should call them jobs? not sure.
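For reference, the label as it would appear on a Job pointing at a LocalQueue; a minimal, hypothetical manifest (queue name and image are made up):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sample
  labels:
    kueue.x-k8s.io/queue-name: team-a-queue   # name of a LocalQueue in the Job's namespace
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: registry.example.com/sample:latest
```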
/close
@alculquicondor: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What type of PR is this?
/kind feature
/kind api-change
What this PR does / why we need it:
Adds a v1beta1 API with changes summarized in this doc https://docs.google.com/document/d/1Uu4hfGxux4Wh_laqZMLxXdEVdty06Sb2DwB035hj700/edit?usp=sharing&resourcekey=0-b7mU7mGPCkEfhjyYDsXOBg (join https://groups.google.com/a/kubernetes.io/g/wg-batch to access)
Which issue(s) this PR fixes:
Ref #23
Special notes for your reviewer:
The first commit contains a pure copy of the v1alpha2 API objects, to facilitate reviewing the diff.
This PR keeps the storage version as v1alpha2 so that tests continue passing. I plan to change that in a follow-up PR, along with updating the implementation of the controllers.