Add priority value tests for default priorityClassNames #30268

Closed
CoreyCook8 wants to merge 9 commits into openshift:main from CoreyCook8:add_e2e_test_for_priority

Conversation

@CoreyCook8

Because of the issue described here: kubernetes/kubernetes#133442

We are setting the priority field on static pods in the PRs:
openshift/cluster-etcd-operator#1476
openshift/cluster-kube-scheduler-operator#572
openshift/cluster-kube-apiserver-operator#1915
openshift/cluster-kube-controller-manager-operator#865

To do this, we need a test that verifies that the expected priority values are in line with each priorityClassName.
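The check the test performs can be sketched roughly like this. This is a minimal illustration, not the actual test code: the function and the hard-coded map are mine, and the real test would read the PriorityClass values from the cluster via the scheduling/v1 API. The two numeric values are the well-known defaults the operator PRs hard-code.

```go
package main

import "fmt"

// Values the static pod manifests assume for the built-in priority
// classes (per the linked operator PRs). Illustrative only.
var assumed = map[string]int32{
	"system-cluster-critical": 2000000000,
	"system-node-critical":    2000001000,
}

// verify returns an error when the value reported by the cluster for a
// priority class diverges from the value hard-coded into the manifests.
func verify(className string, clusterValue int32) error {
	want, ok := assumed[className]
	if !ok {
		return fmt.Errorf("unknown priority class %q", className)
	}
	if want != clusterValue {
		return fmt.Errorf("%s: manifests assume %d but cluster reports %d", className, want, clusterValue)
	}
	return nil
}

func main() {
	fmt.Println(verify("system-node-critical", 2000001000))
	fmt.Println(verify("system-cluster-critical", 42) != nil)
}
```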

@openshift-ci openshift-ci Bot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Sep 17, 2025
@openshift-ci
Contributor

openshift-ci Bot commented Sep 17, 2025

Hi @CoreyCook8. Thanks for your PR.

I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-ci
Contributor

openshift-ci Bot commented Sep 17, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: CoreyCook8
Once this PR has been reviewed and has the lgtm label, please assign neisw for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ingvagabund
Member

/ok-to-test

@openshift-ci openshift-ci Bot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 17, 2025
@openshift-trt

openshift-trt Bot commented Sep 18, 2025

Job Failure Risk Analysis for sha: 753d94c

Job Name Failure Risk
pull-ci-openshift-origin-main-e2e-hypershift-conformance Medium
[sig-sippy] infrastructure should work
This test has passed 85.71% of 14 runs on release 4.21 [Architecture:amd64 FeatureSet:default Installer:hypershift JobTier:standard Network:ovn NetworkStack:ipv4 Owner:eng Platform:aws Procedure:none SecurityMode:default Topology:external Upgrade:micro] in the last week.

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New Test Risks for sha: 753d94c

Job Name New Test Risk
pull-ci-openshift-origin-main-e2e-aws-ovn High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-cgroupsv2 High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-cgroupsv2 High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-edge-zones High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-edge-zones High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-fips High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-fips High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-microshift High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-microshift High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-upgrade High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-upgrade High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-proxy High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-proxy High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-azure High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-azure High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-gcp-ovn High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-gcp-ovn High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
(...showing 20 of 42 rows)

New tests seen in this PR at sha: 753d94c

  • "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" [Total: 21, Pass: 0, Fail: 21, Flake: 0]
  • "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" [Total: 21, Pass: 0, Fail: 21, Flake: 0]

Comment thread on test/extended/pods/priorityclasses.go (outdated)

It("system-node-critical=2000001000", func() {
    By("creating the pods")
    err := oc.Run("create").Args("-f", systemNodeCriticalPodFile).Execute()
Member


Each pod needs to be cleaned up/deleted after it's used. Also, it would be better to create both pods in a temporary namespace.

@openshift-trt

openshift-trt Bot commented Sep 19, 2025

Job Failure Risk Analysis for sha: 389b19a

Job Name Failure Risk
pull-ci-openshift-origin-main-e2e-aws-disruptive Low
Job run should complete before timeout
This test has passed 57.14% of 14 runs on release 4.21 [Architecture:amd64 FeatureSet:default Installer:ipi JobTier:hidden Network:ovn NetworkStack:ipv4 Owner:eng Platform:aws Procedure:none SecurityMode:default Topology:ha Upgrade:micro-downgrade] in the last week.
pull-ci-openshift-origin-main-e2e-openstack-ovn IncompleteTests
Tests for this run (25) are below the historical average (1533): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New Test Risks for sha: 389b19a

Job Name New Test Risk
pull-ci-openshift-origin-main-e2e-aws-ovn High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-cgroupsv2 High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-cgroupsv2 High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-edge-zones High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-edge-zones High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-fips High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-fips High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-microshift High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-microshift High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-upgrade High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-upgrade High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-proxy High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-aws-proxy High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-gcp-ovn High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-gcp-ovn High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-gcp-ovn-techpreview High - "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
pull-ci-openshift-origin-main-e2e-gcp-ovn-techpreview High - "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" is a new test that failed 1 time(s) against the current commit
(...showing 20 of 38 rows)

New tests seen in this PR at sha: 389b19a

  • "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" [Total: 19, Pass: 0, Fail: 19, Flake: 0]
  • "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" [Total: 19, Pass: 0, Fail: 19, Flake: 0]

@openshift-trt

openshift-trt Bot commented Sep 19, 2025

Job Failure Risk Analysis for sha: 47ce8ab

Job Name Failure Risk
pull-ci-openshift-origin-main-e2e-openstack-ovn IncompleteTests
Tests for this run (104) are below the historical average (1552): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New tests seen in this PR at sha: 47ce8ab

  • "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" [Total: 19, Pass: 19, Fail: 0, Flake: 0]
  • "[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" [Total: 19, Pass: 19, Fail: 0, Flake: 0]

spec:
  containers:
  - name: busybox
    image: busybox
Member


The busybox image has not been used in these tests before. Would you please try image-registry.openshift-image-registry.svc:5000/openshift/tools:latest instead, to reduce the number of pulled images? E.g. https://github.com/openshift/origin/blob/main/test/extended/testdata/cmd/test/cmd/testdata/rollingupdate-daemonset.yaml#L30-L31 runs the sleep command through tools:latest as well.
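Applied to the reviewed manifest, the suggested change would look something like this. This is a sketch: the container name and the exact command/args wiring are my guesses, modeled on the linked daemonset example.

```yaml
spec:
  containers:
  - name: tools
    image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
    command: ["/bin/sleep", "infinity"]
```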

@ingvagabund
Member

@CoreyCook8 other than that this looks good to go.

@bertinatto would you please take a look for the approval?

@bertinatto
Member

@CoreyCook8 other than that this looks good to go.

@bertinatto would you please take a look for the approval?

@ingvagabund @CoreyCook8 I’m deferring approvals in this repo to TRT so they’re aware of new tests being added. This makes it easier to track if the tests start failing.

@stbenjam
Member

stbenjam commented Sep 23, 2025

"[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical=2000000000 [Suite:openshift/conformance/parallel]" [Total: 19, Pass: 19, Fail: 0, Flake: 0]
"[sig-node] Pod priority should match the default priorityClassName values system-node-critical=2000001000 [Suite:openshift/conformance/parallel]" [Total: 19, Pass: 19, Fail: 0, Flake: 0]

Please do not include specific values in test names (2000000000 / 2000001000), as they're subject to change (even if extremely unlikely). You can include the value in the output instead.
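The rename this led to (visible in the later risk reports, where the test names no longer carry the numbers) can be sketched like this; the helper names are illustrative, not the test's actual code:

```go
package main

import "fmt"

// testName keeps the numeric value out of the (stable) test name.
func testName(class string) string {
	return fmt.Sprintf("[sig-node] Pod priority should match the default priorityClassName values %s", class)
}

// failureMessage reports the concrete values in the test output instead.
func failureMessage(class string, want, got int32) string {
	return fmt.Sprintf("%s: expected priority %d, got %d", class, want, got)
}

func main() {
	fmt.Println(testName("system-node-critical"))
	fmt.Println(failureMessage("system-node-critical", 2000001000, 0))
}
```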

@openshift-trt

openshift-trt Bot commented Sep 24, 2025

Job Failure Risk Analysis for sha: 84ad3df

Job Name Failure Risk
pull-ci-openshift-origin-main-e2e-aws-ovn-fips IncompleteTests
Tests for this run (104) are below the historical average (2583): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New tests seen in this PR at sha: 84ad3df

  • "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical [Suite:openshift/conformance/parallel]" [Total: 7, Pass: 7, Fail: 0, Flake: 0]
  • "[sig-node] Pod priority should match the default priorityClassName values system-node-critical [Suite:openshift/conformance/parallel]" [Total: 7, Pass: 7, Fail: 0, Flake: 0]

@CoreyCook8
Author

👋 @ingvagabund @stbenjam I believe I have addressed all of the comments if you don't mind taking another look!

@benluddy
Contributor

benluddy commented Oct 1, 2025

It seems like a good idea to have some end-to-end coverage of the priority admission plugin, but have you tried to submit these upstream first? It's unclear to me why we'd want to tie them to downstream.

My understanding is that the mirror pods of OpenShift's static pods do get admitted normally and have the numeric priority set, it's just that Kubelet acts based on the static pod itself rather than its mirror. Is that right?

@CoreyCook8
Author

The purpose of these tests is to ensure that the priorityClassName does not diverge from the priority we are setting on the static pods. So it's not so much about whether the priority admission plugin works, but rather that the constant numbers we are assuming for these priority class names don't change, and that if they do change, we catch it first in a test.
Obviously, it's super unlikely that these values would change, but in case they do, we are covered.

@benluddy
Contributor

benluddy commented Oct 1, 2025

Will the mirror pods fail admission in that case? IIRC one of the behaviors of the priority admission plugin is to reject pods with both number and name set if the number doesn't match.
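That rejection rule can be sketched as follows. This is a simplification of the upstream plugin's check; the function signature and names here are illustrative, not the plugin's actual API.

```go
package main

import "fmt"

// admit mimics, in spirit, the priority admission plugin's rule: when a
// pod sets both a priorityClassName and an explicit numeric priority,
// the number must equal the class's value or the pod is rejected.
func admit(classValue int32, podPriority *int32) error {
	if podPriority != nil && *podPriority != classValue {
		return fmt.Errorf("the integer value of priority (%d) must match the value of its priority class (%d)", *podPriority, classValue)
	}
	return nil
}

func main() {
	matching := int32(2000001000)
	diverging := int32(42)
	fmt.Println(admit(2000001000, &matching))  // numbers match: admitted
	fmt.Println(admit(2000001000, nil))        // unset: resolved from the class
	fmt.Println(admit(2000001000, &diverging) != nil)
}
```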

@bertinatto
Member

bertinatto commented Oct 1, 2025

Will the mirror pods fail admission in that case? IIRC one of the behaviors of the priority admission plugin is to reject pods with both number and name set if the number doesn't match.

We could quickly test that by creating a proof PR similar to this one, setting the priority to a different value. If bootstrapping fails, then we don't need a test here, but it might still be worth proposing it upstream.

Edit: it seems like Ben is right: https://github.com/openshift/kubernetes/blob/72c39d96fb2f38c92a220bf269304e508ecaaac5/plugin/pkg/admission/priority/admission.go#L188

By the way, do we have a bug associated with this? Or is this proactively trying to avoid the problem described in the upstream issue?

@CoreyCook8
Author

If that's the case, perhaps we don't need any new test for this?

And I have a support case with RH support. The gist of the issue is that on SNO (single-node OpenShift), Ceph doesn't shut down correctly. We have graceful shutdown enabled, but once the apiserver/etcd go down, all of the apiserver calls obviously fail and things don't end up shutting down gracefully. The host then hangs while shutting down because of the leftover PVCs that never got properly unmounted. I believe it's a combination of apiserver calls and the KCM's state of the world that causes this, but I was able to do a POC in a dev environment showing that setting the priority explicitly prevents the static pods from shutting down early and allows us to shut down properly.

@CoreyCook8
Author

@benluddy @bertinatto What do you think? Should I close this PR?

@openshift-ci
Contributor

openshift-ci Bot commented Oct 3, 2025

@CoreyCook8: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-azure 47ce8ab link false /test e2e-azure
ci/prow/e2e-gcp-ovn-techpreview 47ce8ab link false /test e2e-gcp-ovn-techpreview
ci/prow/e2e-aws-disruptive 47ce8ab link false /test e2e-aws-disruptive
ci/prow/e2e-metal-ipi-ovn-dualstack 47ce8ab link false /test e2e-metal-ipi-ovn-dualstack
ci/prow/e2e-aws-ovn-single-node-upgrade 9f18d74 link false /test e2e-aws-ovn-single-node-upgrade
ci/prow/e2e-aws-ovn-cgroupsv2 9f18d74 link false /test e2e-aws-ovn-cgroupsv2
ci/prow/e2e-aws-ovn-edge-zones 9f18d74 link false /test e2e-aws-ovn-edge-zones
ci/prow/e2e-aws-ovn-fips 9f18d74 link true /test e2e-aws-ovn-fips
ci/prow/e2e-vsphere-ovn-upi 9f18d74 link true /test e2e-vsphere-ovn-upi
ci/prow/e2e-aws-ovn-serial-2of2 9f18d74 link true /test e2e-aws-ovn-serial-2of2
ci/prow/e2e-vsphere-ovn 9f18d74 link true /test e2e-vsphere-ovn
ci/prow/e2e-aws-ovn-serial-1of2 9f18d74 link true /test e2e-aws-ovn-serial-1of2
ci/prow/e2e-aws-ovn-single-node 9f18d74 link false /test e2e-aws-ovn-single-node
ci/prow/okd-scos-e2e-aws-ovn 9f18d74 link false /test okd-scos-e2e-aws-ovn
ci/prow/e2e-aws-csi 9f18d74 link true /test e2e-aws-csi
ci/prow/e2e-gcp-csi 9f18d74 link true /test e2e-gcp-csi

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-trt

openshift-trt Bot commented Oct 3, 2025

Job Failure Risk Analysis for sha: 9f18d74

Job Name Failure Risk
pull-ci-openshift-origin-main-e2e-aws-csi IncompleteTests
Tests for this run (25) are below the historical average (733): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-main-e2e-aws-ovn-cgroupsv2 IncompleteTests
Tests for this run (25) are below the historical average (1834): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-upgrade IncompleteTests
Tests for this run (28) are below the historical average (3728): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)

Risk analysis has seen new tests most likely introduced by this PR.
Please ensure that new tests meet guidelines for naming and stability.

New tests seen in this PR at sha: 9f18d74

  • "[sig-node] Pod priority should match the default priorityClassName values system-cluster-critical [Suite:openshift/conformance/parallel]" [Total: 11, Pass: 11, Fail: 0, Flake: 0]
  • "[sig-node] Pod priority should match the default priorityClassName values system-node-critical [Suite:openshift/conformance/parallel]" [Total: 11, Pass: 11, Fail: 0, Flake: 0]

@CoreyCook8
Author

@benluddy @bertinatto Pinging here again. Any opinion on this?

@bertinatto
Member

@benluddy @bertinatto Pinging here again. Any opinion on this?

IMO we don't need this test, because admission should fail if the pod contains a priority that's different from the one in the priority class.

setting the priority explicitly prevents the static pods from shutting down early and allowed us to properly shut down

It seems that the support case wouldn't have happened if we had had a test asserting that the priority of control plane mirror pods was properly set. Now that you've fixed that, adding a test for it might be useful (but that's up to you).

@benluddy
Contributor

benluddy commented Oct 8, 2025

priority of control plane mirror pods was properly set

I think the mirror pods were always setting the numeric priority via admission -- it's the static pod manifests themselves that needed it.

IMO, we don't need to maintain a test like this downstream, since part of the behavior of the priority admission plugin is to reject pod creation when the numeric priority and the priority class name don't match. If the upstream coverage is poor for that behavior, then it would benefit everyone to test it upstream.

@CoreyCook8
Author

Okay I think we're on the same page here @benluddy @bertinatto
I'll close this.

We are still waiting for approvals on these to resolve the issue. @ingvagabund, do you mind taking a look if you agree?
openshift/cluster-kube-controller-manager-operator#865
openshift/cluster-kube-scheduler-operator#572
I also think the four changes should be backported to supported OpenShift versions. I'm not sure how to kick off that process; could anyone help me out with that?

@CoreyCook8 CoreyCook8 closed this Oct 13, 2025
