KEP-3243: respect pod topology spread after rolling upgrades #3244
Conversation
Thanks!
Can you link the discussion issue, please? /assign
Done. Added at the top.
/unassign
keps/sig-scheduling/3243-respect-pod-topology-spread-after-rolling-upgrades/README.md
keps/sig-scheduling/3243-respect-pod-topology-spread-after-rolling-upgrades/kep.yaml
Force-pushed from 3be0398 to cb1e11d
@ahg-g
@ahg-g
/lgtm @wojtek-t this should be ready for PRR
From PRR perspective it seems ok - I added two comments for tests.
I would also encourage you to fill in the scalability section of PRR now, but I'm not going to block on it.
persisted Pod object, otherwise it is silently dropped; moreover, kube-scheduler
will ignore the field and continue to behave as before.

### Test Plan
Please adapt to the new test plan guidelines:
Done
- Feature gate enable/disable tests
- `MatchLabelKeys` in `TopologySpreadConstraint` works as expected
- Benchmark tests:
  - Verify no significant performance degradation
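For context, a minimal sketch of how the proposed field is meant to be used, based on the KEP under review (the field name and semantics are still subject to change; the label values and image below are illustrative only). The Deployment controller sets `pod-template-hash` per ReplicaSet, so listing it in `matchLabelKeys` makes the scheduler spread only pods of the same revision, which is what makes spreading behave correctly after a rolling upgrade:

```yaml
# Illustrative Pod sketch, not a definitive API example.
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: foo
    pod-template-hash: "5d7c8f9b6"   # hypothetical value; normally set by the Deployment controller
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: foo
      # Proposed field: the value of each listed key is looked up on the
      # incoming pod's labels and merged into the selector above, so only
      # pods of the same revision are counted for skew.
      matchLabelKeys:
        - pod-template-hash
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
```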
In existing benchmarks? Or new ones?
I'm assuming the existing ones don't exercise the newly added feature - are we ok with that?
I plan to benchmark this using the existing cases in k8s.io/kubernetes/test/integration/scheduler_perf
to verify that there is no significant performance degradation.
Signed-off-by: Alex Wang <[email protected]>
Force-pushed from 85c24e3 to ceee6ae
Force-pushed from ceee6ae to 1b45339
/lgtm Thanks!
/approve
/hold
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ahg-g, denkensk, wojtek-t. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/label tide/merge-method-squash
/hold cancel
Signed-off-by: Alex Wang [email protected]
Discussion Link
kubernetes/kubernetes#98215
kubernetes/kubernetes#105661