Graduate API to beta #23
Comments
/kind feature
Using the current group for the time being, and to avoid the awkward alternative.
I think the value is the direct association with the Kubernetes project. I prefer having it in the API name. But what do others think?
Anyway, it will be moved to core k8s in the future, so why not choose to use it from the start?
Assuming that you don't mean the core group itself.
If that is a one-time thing, then we can pursue it; but I am not sure we want it if it requires a review for every change we make.
It probably does. But maybe we can just rename the group later.
+1. We can mark it unapproved at the alpha version.
This is discouraged, and it only adds to the confusion. Users don't look at CRD definitions. OK, I don't want to delay things; I am fine with keeping the current group.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules.
/lifecycle stale
/lifecycle rotten
/remove-lifecycle stale
Maybe for 0.3.0 :)
/close
@k8s-triage-robot: Closing this issue.
/reopen
@alculquicondor: Reopened this issue.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/lifecycle frozen
We need to get back to this when targeting a v1beta1 API.
I'm starting a list of potential changes to the API; see the issue description.
cc @kerthcet for feedback (and anyone already in the thread, of course)
I'm OK with this; admission represents the actual state of a workload.
Before, we named them
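For readers skimming the thread, here is a minimal sketch of what moving admission from the Workload spec into status could look like. All type and field names below are assumptions for illustration, not kueue's final v1beta1 API:

```go
// Sketch only: spec keeps the user's request; status records what the
// controller actually decided.
package v1beta1

// WorkloadSpec describes the request; it no longer carries admission.
type WorkloadSpec struct {
	// QueueName is the queue the workload is submitted to.
	QueueName string `json:"queueName,omitempty"`
}

// WorkloadStatus reflects the actual state, including the admission decision.
type WorkloadStatus struct {
	// Admission is set by the controller once the workload is admitted.
	Admission *Admission `json:"admission,omitempty"`
}

// Admission records which ClusterQueue admitted the workload.
type Admission struct {
	ClusterQueue string `json:"clusterQueue"`
}
```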
Can you provide some more context on why we need this? It sounds like looking up workloads via a label selector.
No, that would have been very confusing because they already mean something in the pod spec. I'll start a separate doc to discuss options.
It's actually about filtering Jobs using the queue name.
When I joined the kueue project, one of the most confusing aspects of the kueue specification was the relationship between the two.
It made me think of adding queueName to the Job's spec (kubernetes/enhancements#3397); then we could filter Jobs with a field selector.
Yes, that would be ideal, but that KEP got significant pushback, so I don't see it happening anytime soon.
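For reference, the label-selector fallback looks roughly like this with client-go, assuming the queue name is exposed as a label such as kueue.x-k8s.io/queue-name; treat the label key, namespace, and queue name as placeholders:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Filter Jobs by a queue-name label; a field selector on the Job
	// spec would need the KEP referenced above.
	jobs, err := client.BatchV1().Jobs("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kueue.x-k8s.io/queue-name=team-a"})
	if err != nil {
		panic(err)
	}
	for _, j := range jobs.Items {
		fmt.Println(j.Name)
	}
}
```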
We might also need to add ObjectMeta into each PodSet template. The cluster autoscaler needs the metadata to scale up properly.
Is this for scaling up in advance? Or is the autoscaler only watching the unschedulable Pods, which contain the metadata?
Yes, to scale up in advance.
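A sketch of that change, with field names assumed from this thread rather than taken from kueue's final API: carrying a full PodTemplateSpec (which embeds ObjectMeta) instead of a bare PodSpec makes labels and annotations available to the autoscaler.

```go
package v1beta1

import (
	corev1 "k8s.io/api/core/v1"
)

// PodSet describes a homogeneous group of pods in a Workload.
type PodSet struct {
	Name  string `json:"name"`
	Count int32  `json:"count"`

	// Template replaces a bare PodSpec; template.metadata (ObjectMeta)
	// travels with the workload so the autoscaler can simulate
	// scheduling before any pods exist.
	Template corev1.PodTemplateSpec `json:"template"`
}
```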
I've created a summary doc with the proposed changes as we graduate to beta (also available in the issue description): https://docs.google.com/document/d/1Uu4hfGxux4Wh_laqZMLxXdEVdty06Sb2DwB035hj700/edit?usp=sharing&resourcekey=0-b7mU7mGPCkEfhjyYDsXOBg

Some of the enhancements come from UX study sessions that we have conducted; see the notes here: https://docs.google.com/document/d/1xbN46OLuhsXXHeqZrKrl9I57kpFQ2yqiYdOx0sHZC4Q/edit?usp=sharing

I have a WIP in #532.
/assign @alculquicondor
Currently, this would be very cumbersome due to the lack of support in kubebuilder (kubernetes-sigs/controller-tools#656). Once that support is added and we are ready to publish a v1beta1, we should consider renaming the API group. Note that this requires an official api-review (kubernetes/enhancements#1111).
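Independent of the group rename, the version graduation itself would lean on kubebuilder's multi-version support. A rough sketch, with illustrative type names, of marking the new version as the storage version and conversion hub; the alpha types would implement conversion.Convertible against it:

```go
package v1beta1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type WorkloadSpec struct{}
type WorkloadStatus struct{}

// Workload is the hub (storage) version during the migration; older
// served versions convert to and from this type.
// +kubebuilder:object:root=true
// +kubebuilder:storageversion
type Workload struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WorkloadSpec   `json:"spec,omitempty"`
	Status WorkloadStatus `json:"status,omitempty"`
}

// Hub marks this version as the conversion target
// (sigs.k8s.io/controller-runtime/pkg/conversion).
func (*Workload) Hub() {}
```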
Summary doc: https://docs.google.com/document/d/1Uu4hfGxux4Wh_laqZMLxXdEVdty06Sb2DwB035hj700/edit?usp=sharing&resourcekey=0-b7mU7mGPCkEfhjyYDsXOBg (join https://groups.google.com/a/kubernetes.io/g/wg-batch to access)
Potential changes when graduating:
- Move `admission` from the Workload spec into status (from "Enforce timeout for podsReady" #498).
- Rename `min`, `max` into something easier to understand (see the sketch after this list).
- Add `ObjectMeta` into each `PodSet` template.
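As referenced in the list above, a hedged sketch of the min/max rename. The issue only asks for "something easier to understand"; the replacement names below are illustrative candidates, not a decided API:

```go
package v1beta1

import "k8s.io/apimachinery/pkg/api/resource"

// Before: what does exceeding "min" mean? Where does "max" come from?
type QuotaAlpha struct {
	Min resource.Quantity  `json:"min"`
	Max *resource.Quantity `json:"max,omitempty"`
}

// After: names that state their semantics directly (illustrative only).
type Quota struct {
	// NominalQuota is the capacity guaranteed to the queue.
	NominalQuota resource.Quantity `json:"nominalQuota"`
	// BorrowingLimit caps how much the queue may borrow from its cohort.
	BorrowingLimit *resource.Quantity `json:"borrowingLimit,omitempty"`
}
```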