
Conversation

@lchrzaszcz
Contributor

@lchrzaszcz lchrzaszcz commented Jun 2, 2025

What type of PR is this?

/kind documentation

What this PR does / why we need it:

This PR updates Topology-Aware Scheduling KEP to introduce Two-Level Scheduling and PodSet Chunk topology feature.

KEP for #5439

Which issue(s) this PR fixes:

Special notes for your reviewer:

Does this PR introduce a user-facing change?

NONE

@k8s-ci-robot k8s-ci-robot added kind/documentation Categorizes issue or PR as related to documentation. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Jun 2, 2025
@netlify

netlify bot commented Jun 2, 2025

Deploy Preview for kubernetes-sigs-kueue canceled.

| Name | Link |
| --- | --- |
| 🔨 Latest commit | e808294 |
| 🔍 Latest deploy log | https://app.netlify.com/projects/kubernetes-sigs-kueue/deploys/6842fbc0ef8b3f00089ac1f5 |

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 2, 2025
@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jun 2, 2025
@k8s-ci-robot
Contributor

Hi @lchrzaszcz. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jun 2, 2025
@mimowo
Contributor

mimowo commented Jun 2, 2025

cc @gabesaba @PBundyra

@mimowo
Contributor

mimowo commented Jun 2, 2025

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jun 2, 2025
@lchrzaszcz lchrzaszcz mentioned this pull request Jun 2, 2025
@mimowo
Contributor

mimowo commented Jun 2, 2025

Fixes #5439

Change to "Part of", or "KEP for".

We have a robot which will close the issue once a PR with "Fixes" before the issue number is merged.


Extension: support for indicating the order of pods for external Jobs.

#### Story 5
Contributor

@mimowo @tenzen-y

Should we include LWS for "two-level" scheduling?

I think there will be similar work for JobSet/LWS.

Contributor

Yes, extending to LWS is definitely on our radar. However, for LWS we need to co-schedule PodSets rather than "chunk" them. So, we consider it a follow-up.


| mode | PodSet size | 6 | 5 | 4 | 3 | 2 |
| -----------------------| ----------- | - | - | - | - | - |
| BestFit | 12 | 6 | . | 4 | . | 2 |
Contributor
@PBundyra PBundyra Jun 3, 2025

I think it's worth noting that even with the BestFit algorithm there is no guarantee that there won't be any leftover capacity on the selected nodes, since there is already some rounding in the assumption that nodes accommodate an integer number of pods. BestFit mitigates resource fragmentation very well, and it's no doubt the way to go, but let's keep in mind that even it has limits, and we won't eliminate fragmentation completely. Of course, this reasoning applies to both single-level and two-level scheduling.

Contributor Author
@lchrzaszcz lchrzaszcz Jun 4, 2025

I've added a one-sentence disclaimer that the "tight fit" I'm referring to does not guarantee no dangling resources.
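
As a worked illustration of the rounding point above (the numbers here are hypothetical): a node accommodates only an integer number of pods, so some capacity can stay stranded regardless of the fit strategy.

node allocatable CPU       : 10
CPU request per pod        : 3
pods that fit on the node  : floor(10 / 3) = 3
stranded capacity          : 10 - 3 * 3 = 1 CPU, under any fit mode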

@lchrzaszcz lchrzaszcz force-pushed the update-tas-kep-with-two-level-scheduling branch from f05ab6e to e3480a3 Compare June 4, 2025 09:54
@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Jun 4, 2025
metadata:
  annotations:
    kueue.x-k8s.io/podset-slice-required-topology: cloud.provider.com/topology-host
    kueue.x-k8s.io/podset-slice-size: 4
Contributor

What if this annotation kueue.x-k8s.io/podset-slice-size is not provided by the user?

Do we:

  1. return an error from the validation webhook?
  2. default it implicitly?
  3. default it explicitly in our JobSet webhook?

It seems setting it to anything different than "parallelism" in the case of JobSet will be very exceptional. So, I would be leaning towards defaulting the annotation to parallelism when kueue.x-k8s.io/podset-slice-required-topology is set. Leaving this to be set every time by the user makes room for error.

Contributor Author

I think it's a good idea to provide the default for a JobSet. Right now, in this PR: #5353, I'm assuming that the default is 1 pod (because technically TAS scheduling without slice topology is TAS scheduling with a slice topology of the lowest topology level and a slice size of 1 pod).

From the validation point of view, I would say we should assume a default value of parallelism if it's a JobSet, but return an error if it's something else (of course, assuming the slice topology is requested).

Contributor

So, IIUC, for all CRDs except JobSet we throw an error, requiring kueue.x-k8s.io/podset-slice-size to be set explicitly. For JobSet we have a sensible default: parallelism.

We will decide during implementation if we default it implicitly or explicitly.

Contributor Author

Added a sentence to clarify that for JobSet we will introduce a default of parallelism. I've also added a separate section about validation of the PodSet slice size.
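
For illustration, a minimal sketch of the defaulting discussed in this thread (not the final webhook behavior, which is to be decided during implementation):

metadata:
  annotations:
    kueue.x-k8s.io/podset-slice-required-topology: cloud.provider.com/topology-host
    # kueue.x-k8s.io/podset-slice-size omitted:
    #   - JobSet: defaults to the Job template's parallelism
    #   - other CRDs: rejected by the validation webhook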

annotations:
  kueue.x-k8s.io/podset-preferred-topology: cloud.provider.com/topology-block
  kueue.x-k8s.io/podset-slice-required-topology: cloud.provider.com/topology-host
  kueue.x-k8s.io/podset-slice-size: 4
Contributor

Same question about handling JobSets without this annotation, but with kueue.x-k8s.io/podset-slice-required-topology

Contributor Author

Added a sentence to clarify that for JobSet we will introduce a default of parallelism. I've also added a separate section about validation of the PodSet slice size.

Contributor
@mimowo mimowo left a comment

LGTM overall. I just have a question to clarify about handling the case without "kueue.x-k8s.io/podset-slice-size".

Member
@tenzen-y tenzen-y left a comment

Could you update the Story 2 note, since we are working on this as part of the Alpha stage?

Note: not planned for [Alpha](https://github.com/kubernetes-sigs/kueue/tree/main/keps/2724-topology-aware-scheduling#alpha), to be evaluated for [Beta](https://github.com/kubernetes-sigs/kueue/tree/main/keps/2724-topology-aware-scheduling#beta).

https://github.com/kubernetes-sigs/kueue/tree/main/keps/2724-topology-aware-scheduling#story-2

Comment on lines 323 to 324
In this example there will be 8 (2 ReplicatedJob instances with 4
"completions" each) worker pods and we say that we want to split
Member

What if completions and parallelism are different values?
Do you want to say 2 ReplicatedJob instances with 4 "parallelism" each?

Contributor Author
@lchrzaszcz lchrzaszcz Jun 5, 2025

Yes, thanks for spotting that!

Since slice size is equal to "completions" for the Job, the end result
is that each Job will be placed within a "host".

For now PodSet Slice topology has to be required (cannot be "preferred").
Member

What do you mean by "for now"? Does this mean preferred will be supported in the Beta stage?

Contributor Author

That's not something we've planned, so I think it's just a poor choice of words that suggests it is planned. I'll change it to just state that, without any such assumption.

Contributor

We don't currently have any use cases going beyond "required". I would indeed suggest dropping "For now", because it invites questions. When we have such use cases, we will need to update the KEP anyway.

Member

Thank you for the explanation. Could you add it to the Non-Goals section?
Once we decide to support it, we can remove it.

Contributor Author

Sure. I've added it there.

Comment on lines +701 to +729

// PodSetSliceRequiredTopology indicates the topology level required by the PodSet slice, as
// indicated by the `kueue.x-k8s.io/podset-slice-required-topology` annotation.
//
// +optional
PodSetSliceRequiredTopology *string `json:"podSetSliceRequiredTopology,omitempty"`

// PodSetSliceSize indicates the size of a subgroup of pods in a PodSet for which
// Kueue finds a requested topology domain on a level defined
// in `kueue.x-k8s.io/podset-slice-required-topology` annotation.
//
// +optional
PodSetSliceSize *int32 `json:"podSetSliceSize,omitempty"`
Member

topologyRequest:
  podSetSlice:
    requiredTopology: xxx
    size: yyy
    preferred: zzz  # This is not included in this proposal; for future expansion.

Do we need to consider future API expansion?
@mimowo Do you have any plans to expand the PodSet Slice capability in TAS?

Contributor

@mimowo Do you have any plans to expand the PodSet Slice capability in TAS?

No, the current API proposal already covers all the use cases we currently have regarding PodSet slices.

There might be future extensions, but those are hard to predict, as is typical with the future :).

Personally, I think the flat structure, as in the current KEP, is OK for simplicity.

Member

Alright. In that case, let's keep the current form.
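
For comparison with the nested sketch above, a minimal sketch of how the flat shape could render in a PodSet's topologyRequest, assuming the field names from the Go snippet in this KEP (the values are placeholders):

# Hypothetical rendering, for illustration only; field names follow the proposed Go API.
topologyRequest:
  podSetSliceRequiredTopology: cloud.provider.com/topology-host
  podSetSliceSize: 4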

mode. See the table below where the numbers are pods assigned to a particular node.
The headers of the columns dedicated to nodes correspond to the nodes' initial capacity.

| mode | PodSet size | 6 | 5 | 4 | 3 | 2 |
Member

What does PodSet size mean? PodSet count? PodSet Slice size?

Contributor Author

It's the number of pods in the PodSet. To make it more readable, I have a few ideas how to rephrase it:

  • PodSet count - consistent with the code, I guess
  • Pods count
  • Pods count in PodSet
  • Number of pods in PodSet

Which one do you think suits best?

Member

It's the number of pods in the PodSet. To make it more readable, I have a few ideas how to rephrase it:

  • PodSet count - consistent with the code, I guess
  • Pods count
  • Pods count in PodSet
  • Number of pods in PodSet

  Which one do you think suits best?

IIUC, is it calculated by parallelism (Job) × count (PodSet)? If yes, let's say Pods count.
The PodSet count indicates .spec.podSet[*].count, I think.

Contributor Author

If I'm not mistaken, PodSet count is parallelism x replicas (ReplicatedJob), at least that's the whole PodSet.

Member

In the case of JobSet, replicated Job replicas corresponds to the PodSet count.
So, in case of JobSet, PodSet count (.spec.podSet[*].count) is replicatedJob replicas (.spec.template.spec.replicatedJobs[*].replicas).

Contributor Author

I'm a new contributor, so I might be mistaken, but looking here https://github.com/kubernetes-sigs/kueue/blob/main/pkg/controller/jobs/jobset/jobset_controller.go#L232, we multiply parallelism/completions and replicas

Contributor Author

Nevertheless, I suggest that I'll just rename that column to "Pods count", so it's more obvious. WDYT?

Member
@tenzen-y tenzen-y Jun 5, 2025

I'm a new contributor, so I might be mistaken, but looking here https://github.com/kubernetes-sigs/kueue/blob/main/pkg/controller/jobs/jobset/jobset_controller.go#L232, we multiply parallelism/completions and replicas

Yes, that's right. The YAML paths above were not correct. Let me state them more accurately.
I meant the following:

  • PodSet Count: .spec.podSet[*].count (Workload)
  • Pod Count: .spec.replicatedJobs[*].replicas (JobSet) × .spec.replicatedJobs[*].template.spec.parallelism
  • Pod Count, more generically: .spec.podSet[*].count (Workload) × .spec.replicatedJobs[*].template.spec.parallelism

Member

Nevertheless, I suggest that I'll just rename that column to "Pods count", so it's more obvious. WDYT?

If this matches my definition above, "Pods count" would be better.

Contributor Author

Yup. Replaced it with "Pods count".
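
To make the arithmetic above concrete, a minimal JobSet fragment (hypothetical names and values) where Pods count = replicas × parallelism:

# Illustrative only: 2 replicas × parallelism 4 → a PodSet of 8 pods.
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: example-jobset            # hypothetical name
spec:
  replicatedJobs:
  - name: workers
    replicas: 2                   # number of Job replicas
    template:
      spec:
        parallelism: 4            # pods running per Job
        completions: 4
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: worker
              image: registry.example/worker:latest   # placeholder image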


Explanation:
- `BestFit` - We prioritized the 3rd node over the 2nd node because the 3rd node was a tight fit among all domains that could fit 2 slices. The last domain has been "optimized" to find the tight fit.
- `MostFreeCapacity` - We prioritized the 3rd node over the 2nd node because the 3rd node was a tight fit among all domains that could fit 2 slices, so we assigned as many pods as possible there and filled it entirely.
Member

Hmm, that sounds weird. It looks like this violates the MostFree strategy.
Why do we need to violate it only for MostFreeCapacity?

Contributor Author

That's a great question. I was thinking about sorting domains based on state, as that feels intuitive for the "MostFree" strategy. However, I've discussed with @mimowo that this mode is deprecated and we don't really want to invest time in it, so this is the result of just using the "BestFit" logic without optimizing the last domain.

Do you think we should just apply sorting by "state" to the MostFree mode?

Contributor

Yes, and in fact I would like to drop MostFreeCapacity in 0.13. It was implemented first, but we quickly realized BestFit is better in all known cases; we introduced the MostFreeCapacity feature gate just as a bailout option in case there is a bug in BestFit. It has been deprecated since 0.11. It makes life harder, so I would suggest sending a preparatory or follow-up PR to drop it.

wdyt @tenzen-y ?

btw, LeastFreeCapacity we would still like to maintain for a bit, because there might be use cases for reducing fragmentation.

Member

@lchrzaszcz I think the current algorithm looks like a mixed strategy between MostFree and LeastFree, as we can see in your example: | MostFreeCapacity | 12 | 6 | 2 | 4 | . | . |

If we want to keep maintaining the MostFreeCapacity strategy, we should not violate it in the case of Two-Level scheduling either. However, as @mimowo mentioned above, he wants to drop support for the MostFreeCapacity strategy. So, let's delete the strategy first, before we implement the Two-Level scheduling feature.

Yes, and in fact I would like to drop MostFreeCapacity in 0.13. It was implemented first, but we quickly realized BestFit is better in all known cases; we introduced the MostFreeCapacity feature gate just as a bailout option in case there is a bug in BestFit. It has been deprecated since 0.11. It makes life harder, so I would suggest sending a preparatory or follow-up PR to drop it.

wdyt @tenzen-y ?

btw, LeastFreeCapacity we would still like to maintain for a bit, because there might be use cases for reducing fragmentation.

@mimowo Yes, that makes sense. As we discussed in the TAS profiles KEP, we should drop those step by step.
Surely, we should remove the MostFreeCapacity TAS profile first.
For LeastFreeCapacity, let us consider how to grow its maturity and support TAS profiles in a future issue and KEP.

Member

@lchrzaszcz @mimowo I'm OK with refining the MostFreeCapacity description when we delete the TAS profile.
In other words, we can move all MostFreeCapacity descriptions to the Alternatives section.

Contributor Author

I've removed MostFreeCapacity from the two-level scheduling descriptions (this PR). In the other PR (#5536) I'm moving the MostFreeCapacity documentation to the Alternatives section.
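
For reference, the two assignment rows quoted in this thread, side by side (a 12-pod PodSet; the node columns are the nodes' free capacities, each cell is the number of pods assigned):

| mode | Pods count | 6 | 5 | 4 | 3 | 2 |
| ----------------------- | ----------- | - | - | - | - | - |
| BestFit | 12 | 6 | . | 4 | . | 2 |
| MostFreeCapacity | 12 | 6 | 2 | 4 | . | . |

BestFit fills the selected nodes exactly, while MostFreeCapacity leaves 3 pods' worth of free capacity on the 5-capacity node, which is why it reads like a mix of strategies.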

@mimowo
Contributor

mimowo commented Jun 5, 2025

Thank you 👍 This is a great extension to TAS capabilities. I'm pretty sure some questions will pop up as we go (as always 😄).

/lgtm
leaving the final approval to @tenzen-y

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 5, 2025
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: 50634042ff1e05fcee89777c523205bcc9710f04

@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 5, 2025
@k8s-ci-robot k8s-ci-robot requested a review from mimowo June 5, 2025 14:48
Member
@tenzen-y tenzen-y left a comment

Awesome!
Thank you!
/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 6, 2025
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: e45813b9fb7e9fddede8c2717c7c4b82d1321e0c

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lchrzaszcz, tenzen-y

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 6, 2025
@tenzen-y
Member

tenzen-y commented Jun 6, 2025

@lchrzaszcz could you fix the CI with `make toc-update`?

@lchrzaszcz
Contributor Author

@tenzen-y Sure! I'll make sure it is fixed in a follow-up PR here: #5538

@tenzen-y
Member

tenzen-y commented Jun 6, 2025

@tenzen-y Sure! I'll make sure it is fixed in a follow-up PR here: #5538

Sorry? I meant fixing the CI of this PR.

@tenzen-y
Member

tenzen-y commented Jun 6, 2025

Once you run `make toc-update` locally and push the result here, the CI errors will go away.

@lchrzaszcz
Contributor Author

Oh, right, sorry, I thought it had been merged already and failed on the main branch. I've just updated the TOC, but it resulted in no changes. I'll just try to rerun the CI. If it fails again, it might be that the merge with the current main makes it fail (if CI merges with main before running the tests); I'll rebase then.
/test pull-kueue-verify-main

@lchrzaszcz lchrzaszcz force-pushed the update-tas-kep-with-two-level-scheduling branch from b743cae to 82644ff Compare June 6, 2025 14:29
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 6, 2025
@k8s-ci-robot k8s-ci-robot requested a review from tenzen-y June 6, 2025 14:29
Contributor
@mimowo mimowo left a comment

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 6, 2025
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: d6feb784e214e7277430da87b398c01985c63e09

@k8s-ci-robot k8s-ci-robot merged commit f8767fb into kubernetes-sigs:main Jun 6, 2025
8 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v0.13 milestone Jun 6, 2025