protect kubernetes community owned API groups in CRDs #1111
Conversation
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
Sgtm. Some clarification of the PR link would be good.
Updated for comments.
One addition requested in #1111 (comment); overall the approach looks good to me.
Not really a fan of yet another annotation
TL;DR this should really be policy, then enforcement can be made from said policy.
I think that is exactly what this KEP is and does. We've had a well-defined policy for 11 months here: kubernetes/community#2433 . It was written, discussed, and reviewed thoroughly. The fact that it's apparently not well known seems to be an issue of enforcement. This KEP takes the policy and provides a simple enforcement mechanism that makes these standards unignore-able without adding significant friction along the way.
It is well defined for core APIs, but it negates the larger ecosystem, which is where I think policy should be written and then made enforceable. We currently do not have any recommended guidelines for the community of non-core CRDs that are being published around the Kubernetes core.
https://github.com/kubernetes/community/blob/master/sig-architecture/api-review-process.md#what-apis-need-to-be-reviewed is the written policy that covers this. This is the proposal for the mechanism to make that policy enforceable.
Then how does this impact the rest of the community that may overlap with that namespace but has not changed yet? What are the recommendations wrt naming? If the current policy does not take that question into account, which it does not, I'd assert it requires refinement.
This is a really interesting idea... I have some questions / concerns: What do we do about projects that unknowingly violated this? How can we make this more discoverable? I had no idea that this review guidelines doc existed; I suspect many others in the project don't either, especially those working on SIG projects. Can we recommend namespacing guidelines for SIG projects that are unlikely to be core APIs but might want to use CRD storage and might not be the best use of time to review? E.g. if I add some CRDs to sigs.k8s.io/slack-infra for some kubernetes.slack.com configuration, or the "component config" style config for sigs.k8s.io/kind... I sort of doubt API reviewing these is the most productive route vs some alternate API group, but I don't know what the correct namespace would be.
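The protected-group rule under discussion (groups that are `k8s.io` or `kubernetes.io`, or end in `.k8s.io` / `.kubernetes.io`, require an approval annotation) can be sketched in a few lines. This is an illustrative Python approximation, not the actual kube-apiserver code:

```python
# Illustrative approximation of the protected-group rule discussed in
# this thread; the real enforcement lives in the (Go) kube-apiserver.
PROTECTED = ("k8s.io", "kubernetes.io")

def is_protected_group(group: str) -> bool:
    """True if a CRD group falls inside the Kubernetes-owned namespaces."""
    return any(group == s or group.endswith("." + s) for s in PROTECTED)

print(is_protected_group("scheduling.incubator.k8s.io"))  # True: needs approval
print(is_protected_group("charts.konghq.com"))            # False: vendor-owned
```

Under this rule, renaming a group out of the protected namespace (as several commits referenced on this issue do) is the usual remedy for non-core projects.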
* charts(kong): update to kong-2.4.0
  generator command: 'kong-2.4.0'
  generator command version: 2b9dc2a
* release(v0.9.0): update operator metadata and docs
  Release 0.9.0 and change the Kong CRD API group from charts.helm.k8s.io to charts.konghq.com. The new group is not in one of the protected groups established by kubernetes/enhancements#1111. This operator CRD should not use a protected group, as it is not a core part of the Kubernetes project. This change makes the CRD compatible with Kubernetes >=1.22. However, it breaks compatibility with previous versions of the operator. As such, 0.9.0 has no replace version: it requires a fresh operator install and a fresh set of Kong CRs.
* test: update microk8s to 1.22
* test: update kubectl to 1.22.2
* test: update Ingress API version
* feat: support Ingress v1
* test: remove Ingress waits
  Remove the Ingress status waits and add retry configuration to curl when validating the Ingress configuration. KIC 2.0+ handles status updates for non-LoadBalancer Services differently than earlier versions. Previously, KIC would set a status with a 0-length list of ingresses if the proxy Service was not type LoadBalancer, e.g. `status: {loadBalancer: {ingress: [{}]}}`. As of KIC 2.0, no status is set if the Service is not type LoadBalancer, e.g. `status: {loadBalancer: {}}`. This change to the operator tests confirms that Ingress configuration was successfully applied to the proxy using requests through the proxy only. These now run immediately after the upstream Deployment becomes available, however, so they may run before the controller has ingested Ingress configuration or observed Endpoint updates. To account for this, the curl checks are now wrapped in wait_for to allow a reasonable amount of time for the controller to update configuration.
* fix: update ingress example in README.md to v1
* feat: update OLM maintainer info

Co-authored-by: Shane Utt <[email protected]>
Co-authored-by: Michał Flendrich <[email protected]>
apiextensions.k8s.io/v1beta1 is removed as of K8s v1.22 [1], so all CRDs have to be updated to apiextensions.k8s.io/v1. This commit does the upgrade for the turndownschedule CRD. As part of the API updates, K8s is enforcing that things grouped under *.k8s.io be approved [2], because they are actually supposed to be Kubernetes community-managed APIs [3]. So this commit also changes the CRD from turndownschedules.kubecost.k8s.io to turndownschedules.kubecost.com. This is in line with the K8s rules and links to our main domain. Tested by applying cluster-turndown-full.yaml and example-schedule.yaml successfully. [1] https://cloud.google.com/kubernetes-engine/docs/deprecations/apis-1-22 [2] kubernetes/enhancements#1111 [3] kubernetes/enhancements#1111 (comment)
Hi, I have an old CRD with group "scheduling.incubator.k8s.io"; how can I pass the check and make it work? I added the annotation. Following is the relevant part of my CRD definition:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: queues.scheduling.incubator.k8s.io
annotations:
api-approved.kubernetes.io: "unapproved, experimental-only"
spec:
group: scheduling.incubator.k8s.io
names:
kind: Queue
listKind: QueueList
plural: queues
shortNames:
- q
singular: queue
  scope: Cluster
Not exporting the cluster-scoped part yet; not sure about the ability to support it.

The CRDs and APIExports were produced by the following bashery. The following function converts a `kubectl api-resources` listing into a listing of arguments to the kcp crd-puller.

```bash
function rejigger() {
  # A listing line has 4 fields (NAME APIVERSION NAMESPACED KIND) or
  # 5 fields when SHORTNAMES is present; pick the APIVERSION field.
  if [[ $# -eq 4 ]]; then
    gv="$2"
  else
    gv="$3"
  fi
  case "$gv" in
    (*/*) group=.$(echo "$gv" | cut -f1 -d/) ;;
    (*)   group="" ;;
  esac
  # Emit the resource name, qualified by group when there is one.
  echo "${1}$group"
}
```

With `kubectl` configured to manipulate a kcp workspace, the following command captures the listing of resources built into that kcp workspace.

```bash
kubectl api-resources | grep -v APIVERSION | while read line; do rejigger $line; done > /tmp/kcp-rgs.txt
```

With `kubectl` configured to manipulate a kind cluster, the following commands capture the resource listing split into namespaced and cluster-scoped.

```bash
kubectl api-resources | grep -v APIVERSION | grep -w true | while read line; do rejigger $line; done > /tmp/kind-ns-rgs.txt
kubectl api-resources | grep -v APIVERSION | grep -w false | while read line; do rejigger $line; done > /tmp/kind-cs-rgs.txt
```

With CWD=config/kube/exports/namespaced,

```bash
crd-puller --kubeconfig $KUBECONFIG $(grep -v -f /tmp/kcp-rgs.txt /tmp/kind-ns-rgs.txt)
```

With CWD=config/kube/exports/cluster-scoped,

```bash
crd-puller --kubeconfig $KUBECONFIG $(grep -v -f /tmp/kcp-rgs.txt /tmp/kind-cs-rgs.txt)
```

Sadly, kubernetes/kubernetes#118698 is a thing, so I manually hacked the CRD for jobs. Sadly, the filenames produced by the crd-puller are not loved by apigen; the following function renames one file as needed.

```bash
function fixname() {
  rg=${1%%.yaml}
  # Rename <resource>.<group>.yaml to <group>_<resource>.yaml;
  # ungrouped (core) resources get the core.k8s.io prefix.
  case $rg in
    (*.*) g=$(echo $rg | cut -d. -f2-)
          r=$(echo $rg | cut -d. -f1) ;;
    (*)   g=core.k8s.io
          r=$rg ;;
  esac
  mv ${rg}.yaml ${g}_${r}.yaml
}
```

In each of those CRD directories,

```bash
for fn in *.yaml; do fixname $fn; done
```

Penultimately, with CWD=config/kube,

```bash
../../hack/tools/apigen --input-dir crds/namespaced --output-dir exports/namespaced
../../hack/tools/apigen --input-dir crds/cluster-scoped --output-dir exports/cluster-scoped
```

Finally, kubernetes/enhancements#1111 applies to APIExport/APIBinding as well as to CRDs, and the CRD puller does not know anything about this (not that it would help?). I manually hacked the namespaced APIResource files that needed it to have an `api-approved.kubernetes.io` annotation. It turns out that the checking in the apiserver only requires that the annotation's value parse as a URL (any URL will do).

Signed-off-by: Mike Spreitzer <[email protected]>
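To make the transformation concrete, here is a small Python model of what `rejigger` computes from one whitespace-split `kubectl api-resources` line (illustrative only; the pipeline itself uses the bash function above):

```python
def rejigger(fields):
    # 4 fields: NAME APIVERSION NAMESPACED KIND (no shortnames);
    # 5 fields: NAME SHORTNAMES APIVERSION NAMESPACED KIND.
    gv = fields[1] if len(fields) == 4 else fields[2]
    # Qualify the resource name with its group when the APIVERSION
    # column has the group/version form; core resources have none.
    group = "." + gv.split("/")[0] if "/" in gv else ""
    return fields[0] + group

print(rejigger(["deployments", "deploy", "apps/v1", "true", "Deployment"]))  # deployments.apps
print(rejigger(["configmaps", "cm", "v1", "true", "ConfigMap"]))             # configmaps
```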
Not exporting the cluster-scoped part yet; not sure about the ability to support it. Updated example1 to exercise this by switching the common workload from a Deployment object to a ReplicaSet object.

The CRDs and APIExports were produced by the following bashery. The following function converts a `kubectl api-resources` listing into a listing of arguments to the kcp crd-puller.

```bash
function rejigger() {
  if [[ $# -eq 4 ]]; then
    gv="$2"
  else
    gv="$3"
  fi
  case "$gv" in
    (*/*) group=.$(echo "$gv" | cut -f1 -d/) ;;
    (*)   group="" ;;
  esac
  echo "${1}$group"
}
```

With `kubectl` configured to manipulate a kcp workspace, the following command captures the listing of resources built into that kcp workspace.

```bash
kubectl api-resources | grep -v APIVERSION | while read line; do rejigger $line; done > /tmp/kcp-rgs.txt
```

With `kubectl` configured to manipulate a kind cluster, the following commands capture the resource listing split into namespaced and cluster-scoped.

```bash
kubectl api-resources | grep -v APIVERSION | grep -w true | while read line; do rejigger $line; done > /tmp/kind-ns-rgs.txt
kubectl api-resources | grep -v APIVERSION | grep -w false | while read line; do rejigger $line; done > /tmp/kind-cs-rgs.txt
```

With CWD=config/kube/exports/namespaced,

```bash
crd-puller --kubeconfig $KUBECONFIG $(grep -v -f /tmp/kcp-rgs.txt /tmp/kind-ns-rgs.txt)
```

With CWD=config/kube/exports/cluster-scoped,

```bash
crd-puller --kubeconfig $KUBECONFIG $(grep -v -f /tmp/kcp-rgs.txt /tmp/kind-cs-rgs.txt)
```

Sadly, kubernetes/kubernetes#118698 is a thing, so I manually hacked the CRD for jobs. Sadly, the filenames produced by the crd-puller are not loved by apigen; the following function renames one file as needed.

```bash
function fixname() {
  rg=${1%%.yaml}
  case $rg in
    (*.*) g=$(echo $rg | cut -d. -f2-)
          r=$(echo $rg | cut -d. -f1) ;;
    (*)   g=core.k8s.io
          r=$rg ;;
  esac
  mv ${rg}.yaml ${g}_${r}.yaml
}
```

In each of those CRD directories,

```bash
for fn in *.yaml; do fixname $fn; done
```

Penultimately, with CWD=config/kube,

```bash
../../hack/tools/apigen --input-dir crds/namespaced --output-dir exports/namespaced
../../hack/tools/apigen --input-dir crds/cluster-scoped --output-dir exports/cluster-scoped
```

Finally, kubernetes/enhancements#1111 applies to APIExport/APIBinding as well as to CRDs, and the CRD puller does not know anything about this (not that it would help?). I manually hacked the namespaced APIResource files that needed it to have an `api-approved.kubernetes.io` annotation. It turns out that the checking in the apiserver only requires that the annotation's value parse as a URL (any URL will do).

Signed-off-by: Mike Spreitzer <[email protected]>
Not exporting the cluster-scoped part yet; not sure about the ability to support it. Updated example1 to exercise this by switching the common workload from a Deployment object to a ReplicaSet object. Also updated example1 to use `kubestellar init` because that now does a lot more than just create one workspace and do one `kubectl apply`.

The CRDs and APIExports were produced by the following bashery. The following function converts a `kubectl api-resources` listing into a listing of arguments to the kcp crd-puller.

```bash
function rejigger() {
  if [[ $# -eq 4 ]]; then
    gv="$2"
  else
    gv="$3"
  fi
  case "$gv" in
    (*/*) group=.$(echo "$gv" | cut -f1 -d/) ;;
    (*)   group="" ;;
  esac
  echo "${1}$group"
}
```

With `kubectl` configured to manipulate a kcp workspace, the following command captures the listing of resources built into that kcp workspace.

```bash
kubectl api-resources | grep -v APIVERSION | while read line; do rejigger $line; done > /tmp/kcp-rgs.txt
```

With `kubectl` configured to manipulate a kind cluster, the following commands capture the resource listing split into namespaced and cluster-scoped.

```bash
kubectl api-resources | grep -v APIVERSION | grep -w true | while read line; do rejigger $line; done > /tmp/kind-ns-rgs.txt
kubectl api-resources | grep -v APIVERSION | grep -w false | while read line; do rejigger $line; done > /tmp/kind-cs-rgs.txt
```

With CWD=config/kube/exports/namespaced,

```bash
crd-puller --kubeconfig $KUBECONFIG $(grep -v -f /tmp/kcp-rgs.txt /tmp/kind-ns-rgs.txt)
```

With CWD=config/kube/exports/cluster-scoped,

```bash
crd-puller --kubeconfig $KUBECONFIG $(grep -v -f /tmp/kcp-rgs.txt /tmp/kind-cs-rgs.txt)
```

I manually deleted the four CRDs from https://github.com/kcp-dev/kcp/tree/v0.11.0/config/rootcompute/kube-1.24 . Sadly, kubernetes/kubernetes#118698 is a thing, so I manually hacked the CRD for jobs. Sadly, the filenames produced by the crd-puller are not loved by apigen; the following function renames one file as needed.

```bash
function fixname() {
  rg=${1%%.yaml}
  case $rg in
    (*.*) g=$(echo $rg | cut -d. -f2-)
          r=$(echo $rg | cut -d. -f1) ;;
    (*)   g=core.k8s.io
          r=$rg ;;
  esac
  mv ${rg}.yaml ${g}_${r}.yaml
}
```

In each of those CRD directories,

```bash
for fn in *.yaml; do fixname $fn; done
```

Penultimately, with CWD=config/kube,

```bash
../../hack/tools/apigen --input-dir crds/namespaced --output-dir exports/namespaced
../../hack/tools/apigen --input-dir crds/cluster-scoped --output-dir exports/cluster-scoped
```

Finally, kubernetes/enhancements#1111 applies to APIExport/APIBinding as well as to CRDs, and the CRD puller does not know anything about this (not that it would help?). I manually hacked the namespaced APIResource files that needed it to have an `api-approved.kubernetes.io` annotation. It turns out that the checking in the apiserver only requires that the annotation's value parse as a URL (any URL will do).

Signed-off-by: Mike Spreitzer <[email protected]>
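The observation that the apiserver "only requires that the annotation's value parse as a URL" can be approximated as follows. This is a loose Python sketch of the idea, not the real (Go) validation; values like `unapproved, ...` are handled separately by the apiserver and are not modeled here:

```python
from urllib.parse import urlparse

def plausibly_approved(value: str) -> bool:
    # Loose approximation: accept anything that parses with a scheme
    # and a host, e.g. a link to the approving pull request.
    parsed = urlparse(value)
    return bool(parsed.scheme and parsed.netloc)

print(plausibly_approved("https://github.com/kubernetes/enhancements/pull/1111"))  # True
```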
For anyone reading this in the future, the KEP now lives here: https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/2337-k8s.io-group-protection
…ow#1298)

* Migrate Spark CRDs from v1beta1 to v1
* Add extra printer columns for CRDs; bump chart version
* Update CRDs definition files on app manifest
* Add annotation on CRDs to ignore the new policy to not use the CRD groups k8s.io or kubernetes.io (kubernetes/enhancements#1111)

Signed-off-by: Daniel AguadoAraujo <[email protected]>
This commit adds the following annotation: `api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshot-metadata/pull/2"`. Refer to kubernetes/enhancements#1111 for more details. Signed-off-by: Rakshith R <[email protected]>
API groups are organized by namespace, similar to Java packages; `authorization.k8s.io` is one example. When users create CRDs, they get to specify an API group, and their type will be injected into that group by the kube-apiserver.

The `*.k8s.io` and `*.kubernetes.io` groups are owned by the Kubernetes community and protected by API review (see "What APIs need to be reviewed") to ensure consistency and quality. To avoid confusion in our API groups and prevent accidentally claiming a space inside of the Kubernetes API groups, the kube-apiserver needs to be updated to protect these reserved API groups.

This KEP proposes adding an `api-approved.kubernetes.io` annotation to CustomResourceDefinition. This is only needed if the CRD group is `k8s.io` or `kubernetes.io`, or ends with `.k8s.io` or `.kubernetes.io`. The value should be a link to the pull request where the API has been approved.
/assign @jpbetz @liggitt @sttts
@kubernetes/sig-api-machinery-api-reviews @kubernetes/sig-architecture-api-reviews