NE-2096: Bump to OSSM 3.1.0 and Istio 1.26.2 #1257

Merged

openshift-merge-bot[bot] merged 1 commit into openshift:master from Miciah:NE-2096-bump-to-OSSM-3.1.0 on Aug 11, 2025
Conversation

Miciah (Contributor) commented Aug 5, 2025

Bump from OSSM v3.0.1 to v3.1.0 and from Istio v1.24.4 to v1.26.2.

This commit resolves NE-2096.

https://issues.redhat.com/browse/NE-2096

* cmd/ingress-operator/start.go (defaultGatewayAPIOperatorVersion)
(defaultIstioVersion):
* manifests/02-deployment-ibm-cloud-managed.yaml
(GATEWAY_API_OPERATOR_VERSION, ISTIO_VERSION):
* manifests/02-deployment.yaml
(GATEWAY_API_OPERATOR_VERSION, ISTIO_VERSION):
* pkg/operator/controller/gatewayclass/istio.go (desiredIstio):
Bump from OSSM v3.0.1 to v3.1.0 and from Istio v1.24.4 to v1.26.2.
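
For orientation, here is a minimal sketch of what the bumped defaults plausibly look like in cmd/ingress-operator/start.go. The constant names come from the change list above; the exact value formats and surrounding code are assumptions (the Istio version matches the `oc get istio` output in the pre-merge test below), not the file's actual contents:

// Sketch only: constant names are taken from the change list above; the
// value formats are assumptions, not the actual file contents.
const (
	// defaultGatewayAPIOperatorVersion pins the OSSM (servicemeshoperator3)
	// release that the ingress operator installs for Gateway API support.
	defaultGatewayAPIOperatorVersion = "3.1.0"

	// defaultIstioVersion pins the Istio control-plane version that the
	// operator requests on the Istio CR it creates.
	defaultIstioVersion = "v1.26.2"
)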
openshift-ci-robot added the jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type.) label on Aug 5, 2025
openshift-ci-robot (Contributor) commented Aug 5, 2025

@Miciah: This pull request references NE-2096 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.20.0" version, but no target version was set.

In response to this:

Bump from OSSM v3.0.1 to v3.1.0 and from Istio v1.24.4 to v1.26.2.


openshift-ci bot requested review from gcs278 and grzpiotrowski on August 5, 2025 19:20
lihongan (Contributor) commented Aug 6, 2025

/retest

lihongan (Contributor) commented Aug 6, 2025

/payload-job periodic-ci-openshift-hypershift-release-4.20-periodics-e2e-azure-aks-ovn-conformance

openshift-ci bot commented Aug 6, 2025

@lihongan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-hypershift-release-4.20-periodics-e2e-azure-aks-ovn-conformance

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/d8e200c0-729c-11f0-8a74-3ec8c4db82af-0

lihongan (Contributor) commented Aug 6, 2025

/payload-job periodic-ci-openshift-multiarch-master-nightly-4.20-ocp-e2e-aws-ovn-arm64

openshift-ci bot commented Aug 6, 2025

@lihongan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-multiarch-master-nightly-4.20-ocp-e2e-aws-ovn-arm64

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c6f72410-729e-11f0-94d5-1d9c0f755feb-0

lihongan (Contributor) commented Aug 6, 2025

/payload-job periodic-ci-openshift-release-master-ci-4.20-e2e-gcp-ovn-xpn

openshift-ci bot commented Aug 6, 2025

@lihongan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.20-e2e-gcp-ovn-xpn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/82e64bf0-72a0-11f0-93a8-2ad5d629f8df-0

lihongan (Contributor) commented Aug 6, 2025

Pre-merge tested; no issues found:

$ oc get clusterversion
NAME      VERSION                                                AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.20.0-0-2025-08-06-064912-test-ci-ln-tbdcdyt-latest   True        False         89m     Cluster version is 4.20.0-0-2025-08-06-064912-test-ci-ln-tbdcdyt-latest

$ oc -n openshift-operators get installplan
NAME            CSV                           APPROVAL   APPROVED
install-tm4tb   servicemeshoperator3.v3.1.0   Manual     true

$ oc -n openshift-operators get csv
NAME                          DISPLAY                            VERSION   REPLACES                      PHASE
servicemeshoperator3.v3.1.0   Red Hat OpenShift Service Mesh 3   3.1.0     servicemeshoperator3.v3.0.3   Succeeded

$ oc get istio
NAME                NAMESPACE           PROFILE   REVISIONS   READY   IN USE   ACTIVE REVISION     STATUS    VERSION   AGE
openshift-gateway   openshift-ingress             1           1       1        openshift-gateway   Healthy   v1.26.2   4m3s

// create the InferencePool and InferenceModel CRDs
$ kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/releases/latest/download/manifests.yaml
customresourcedefinition.apiextensions.k8s.io/inferencemodels.inference.networking.x-k8s.io created
customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.x-k8s.io created

$ oc get istio openshift-gateway -oyaml | grep -i inference -B1
      env:
        ENABLE_GATEWAY_API_INFERENCE_EXTENSION: "true"

$ oc -n openshift-ingress get deployment istiod-openshift-gateway -oyaml | grep -i inference -A1
        - name: ENABLE_GATEWAY_API_INFERENCE_EXTENSION
          value: "true"

// after deleting the InferencePool CRD, the "ENABLE_GATEWAY_API_INFERENCE_EXTENSION" env var is removed
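
The toggling observed above suggests logic along these lines in desiredIstio (pkg/operator/controller/gatewayclass/istio.go). This is a hedged sketch, not the operator's actual code; the function shape and the inferencePoolCRDExists parameter are assumptions:

// Sketch only: shows the inference-extension flag being set on the istiod
// environment when the InferencePool CRD is present. Names and structure
// here are illustrative assumptions, not the operator's real implementation.
func istiodEnv(inferencePoolCRDExists bool) map[string]string {
	env := map[string]string{}
	if inferencePoolCRDExists {
		// Propagates to the Istio CR's spec.values.pilot.env and from
		// there to the istiod Deployment, as shown in the transcript above.
		env["ENABLE_GATEWAY_API_INFERENCE_EXTENSION"] = "true"
	}
	// Omitting the key entirely (rather than setting "false") matches the
	// observed behavior: the variable disappears once the CRD is deleted.
	return env
}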

candita (Contributor) commented Aug 6, 2025

/assign @alebedev87

alebedev87 (Contributor) left a comment

/lgtm
/approve

openshift-ci bot added the lgtm (Indicates that a PR is ready to be merged.) label on Aug 7, 2025
openshift-ci bot commented Aug 7, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alebedev87

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved (Indicates a PR has been approved by an approver from all required OWNERS files.) label on Aug 7, 2025
openshift-ci-robot (Contributor)

/retest-required

Remaining retests: 0 against base HEAD c2c37ca and 2 for PR HEAD c236d0c in total

alebedev87 (Contributor) commented Aug 7, 2025

From the Slack thread about the upgrade path to v3.1.0:

The v3.1.0 release added a skipRange setting to enable a fast-forward upgrade from any v3.0.z version directly to v3.1.0 (Slack thread). This means that for any OCP 4.19.z cluster that has OSSM v3.0.0 or v3.0.1 (the only two versions we shipped), the upgrade to v3.1.0 does not need any upgrade logic, similar to the bump to v3.0.1 that we backported recently. This gives us the option to go ahead and backport the v3.1.0 bump PR to release-4.19 without any code changes.

/cherry-pick release-4.19

@openshift-cherrypick-robot

@alebedev87: once the present PR merges, I will cherry-pick it on top of release-4.19 in a new PR and assign it to you.

In response to this:

/cherry-pick release-4.19

alebedev87 (Contributor)

Some firewall-like problems while sending HTTP requests from the CI cluster to OCP:

http://34.61.34.57": dial tcp 34.61.34.57:80: connect: connection refused
dial tcp 35.223.8.27:80: connect: connection refused
Get "http://34.41.75.69": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Get "http://35.223.8.27": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

/retest-required

lihongan (Contributor) commented Aug 8, 2025

/retest-required

lihongan (Contributor) commented Aug 8, 2025

/retest-required

alebedev87 (Contributor) commented Aug 8, 2025

FAIL: TestAll/parallel/Test_IdleConnectionTerminationPolicyImmediate

This test exercises HAProxy's idle-close-on-response option. I don't see how this PR could affect it; moreover, the test already passed, and what we are dealing with here is the bot's retesting (due to a target-branch move). However, I'm going to spin up a cluster and test it locally. Meanwhile, a retest...

/test e2e-gcp-operator

alebedev87 (Contributor)

Install failed.

/test e2e-gcp-operator

alebedev87 (Contributor) commented Aug 8, 2025

Test_IdleConnectionTerminationPolicyImmediate tested manually; no failures in 5 consecutive runs:

$ git clone https://github.com/Miciah/cluster-ingress-operator.git
$ git co NE-2096-bump-to-OSSM-3.1.0
$ podman build -t quay.io/alebedev/cluster-ingress-operator:8.8.112-ossm310 .
$ podman push quay.io/alebedev/cluster-ingress-operator:8.8.112-ossm310
$ cvodown
Warning: spec.template.spec.nodeSelector[node-role.kubernetes.io/master]: use "node-role.kubernetes.io/control-plane" instead
deployment.apps/cluster-version-operator scaled
$ oc -n openshift-ingress-operator edit deploy
Warning: spec.template.spec.nodeSelector[node-role.kubernetes.io/master]: use "node-role.kubernetes.io/control-plane" instead
deployment.apps/ingress-operator edited
$ TEST=Test_IdleConnectionTerminationPolicyImmediate make test-e2e | tee run1
go generate ./pkg/manifests
CGO_ENABLED=1 GO111MODULE=on GOFLAGS=-mod=vendor go test -timeout 1.5h -count 1 -v -tags e2e -run "Test_IdleConnectionTerminationPolicyImmediate" ./test/e2e
=== RUN   Test_IdleConnectionTerminationPolicyImmediate
=== PAUSE Test_IdleConnectionTerminationPolicyImmediate
=== CONT  Test_IdleConnectionTerminationPolicyImmediate
    idle_connection_test.go:547: Creating namespace "idle-connection-close-immediate-2jgxw"...
    idle_connection_test.go:547: Waiting for ServiceAccount idle-connection-close-immediate-2jgxw/default to be provisioned...
    idle_connection_test.go:547: Waiting for RoleBinding idle-connection-close-immediate-2jgxw/system:image-pullers to be created...
    idle_connection_test.go:547: Creating IngressController openshift-ingress-operator/idle-connection-close-immediate-2jgxw...
    util_test.go:694: waiting for loadbalancer domain a3e551fa4d2484156ae114e0ba9082e8-506901531.us-east-2.elb.amazonaws.com to resolve...
    util_test.go:694: waiting for loadbalancer domain a3e551fa4d2484156ae114e0ba9082e8-506901531.us-east-2.elb.amazonaws.com to resolve...
    util_test.go:694: waiting for loadbalancer domain a3e551fa4d2484156ae114e0ba9082e8-506901531.us-east-2.elb.amazonaws.com to resolve...
    util_test.go:694: waiting for loadbalancer domain a3e551fa4d2484156ae114e0ba9082e8-506901531.us-east-2.elb.amazonaws.com to resolve...
    util_test.go:694: waiting for loadbalancer domain a3e551fa4d2484156ae114e0ba9082e8-506901531.us-east-2.elb.amazonaws.com to resolve...
    util_test.go:694: waiting for loadbalancer domain a3e551fa4d2484156ae114e0ba9082e8-506901531.us-east-2.elb.amazonaws.com to resolve...
    util_test.go:714: verified connectivity with workload with req http://a3e551fa4d2484156ae114e0ba9082e8-506901531.us-east-2.elb.amazonaws.com and response 200
    operator_test.go:4269: pod idle-connection-close-immediate-2jgxw/web-service-1 not ready
    operator_test.go:4269: pod idle-connection-close-immediate-2jgxw/web-service-1 not ready
    operator_test.go:4269: pod idle-connection-close-immediate-2jgxw/web-service-1 not ready
    operator_test.go:4269: pod idle-connection-close-immediate-2jgxw/web-service-1 not ready
    operator_test.go:4269: pod idle-connection-close-immediate-2jgxw/web-service-2 not ready
    operator_test.go:4269: pod idle-connection-close-immediate-2jgxw/web-service-2 not ready
    operator_test.go:4269: pod idle-connection-close-immediate-2jgxw/web-service-2 not ready
    idle_connection_test.go:547: step 1: Verify the initial response is correctly served by web-service-1
    idle_connection_test.go:551: [192.168.1.27:56828 -> 3.146.171.80:80] Req: URL=http://3.146.171.80, Host=test-idle-connection-close-immediate-2jgxw.apps.ci-ln-il9v9s2-76ef8.aws-2.ci.openshift.org
    idle_connection_test.go:551: [192.168.1.27:56828 <- 3.146.171.80:80] Res: Status=200, Headers=map[Content-Length:[8] Content-Type:[text/plain; charset=utf-8] Date:[Fri, 08 Aug 2025 10:34:19 GMT] Set-Cookie:[f8d11ab63b4ef3906a11f4953d8d2645=07ac3e5b94fa5cf79dd58dbeb7b9d014; path=/; HttpOnly] X-Pod-Name:[web-service-1] X-Pod-Namespace:[unknown-namespace]]
    idle_connection_test.go:547: step 2: Switch route to web-service-2 and verify Immediate policy ensures new responses are served by web-service-2
    idle_connection_test.go:561: [192.168.1.27:50370 -> 3.146.171.80:80] Req: URL=http://3.146.171.80, Host=test-idle-connection-close-immediate-2jgxw.apps.ci-ln-il9v9s2-76ef8.aws-2.ci.openshift.org
    idle_connection_test.go:561: [192.168.1.27:50370 <- 3.146.171.80:80] Res: Status=200, Headers=map[Content-Length:[8] Content-Type:[text/plain; charset=utf-8] Date:[Fri, 08 Aug 2025 10:34:40 GMT] Set-Cookie:[f8d11ab63b4ef3906a11f4953d8d2645=ce5d964c57a9d2d184d874479492cb07; path=/; HttpOnly] X-Pod-Name:[web-service-2] X-Pod-Namespace:[unknown-namespace]]
    idle_connection_test.go:547: step 3: Ensure subsequent responses are served by web-service-2
    idle_connection_test.go:568: [192.168.1.27:50370 -> 3.146.171.80:80] Req: URL=http://3.146.171.80, Host=test-idle-connection-close-immediate-2jgxw.apps.ci-ln-il9v9s2-76ef8.aws-2.ci.openshift.org
    idle_connection_test.go:568: [192.168.1.27:50370 <- 3.146.171.80:80] Res: Status=200, Headers=map[Content-Length:[8] Content-Type:[text/plain; charset=utf-8] Date:[Fri, 08 Aug 2025 10:34:40 GMT] Set-Cookie:[f8d11ab63b4ef3906a11f4953d8d2645=ce5d964c57a9d2d184d874479492cb07; path=/; HttpOnly] X-Pod-Name:[web-service-2] X-Pod-Namespace:[unknown-namespace]]
    idle_connection_test.go:399: deleted ingresscontroller idle-connection-close-immediate-2jgxw
    util_test.go:953: Dumping events in namespace "idle-connection-close-immediate-2jgxw"...
    util_test.go:957: Deleting namespace "idle-connection-close-immediate-2jgxw"...
--- PASS: Test_IdleConnectionTerminationPolicyImmediate (200.76s)
PASS
ok  	github.com/openshift/cluster-ingress-operator/test/e2e	201.696s

openshift-ci-robot (Contributor)

/retest-required

Remaining retests: 0 against base HEAD 0c57689 and 1 for PR HEAD c236d0c in total

alebedev87 (Contributor)

/retest-required

alebedev87 (Contributor)

/retest-required

alebedev87 (Contributor)

/test e2e-aws-ovn-hypershift-conformance

alebedev87 (Contributor)

The test echo pod got deleted before the HTTPRoute connectivity check started (namespace events show that the pod lived for 22 seconds):

    util_gatewayapi_test.go:907: GET test-hostname-t64lk.gws.ci-op-f2l3wlbj-43abb.origin-ci-int-aws.dev.rhcloud.com failed: status 503, expected 200, retrying...
    util_gatewayapi_test.go:915: Response headers for most recent request: map[Content-Length:[19] Content-Type:[text/plain] Date:[Sat, 09 Aug 2025 23:21:08 GMT]]
    util_gatewayapi_test.go:916: Reponse body for most recent request: no healthy upstream
    util_gatewayapi_test.go:918: Error connecting to test-hostname-t64lk.gws.ci-op-f2l3wlbj-43abb.origin-ci-int-aws.dev.rhcloud.com: context deadline exceeded
    util_test.go:953: Dumping events in namespace "test-e2e-gwapi-g4fzc"...
    util_test.go:955: 0001-01-01 00:00:00 +0000 UTC { } Pod test-gateway-openshift-default Scheduled Successfully assigned test-e2e-gwapi-g4fzc/test-gateway-openshift-default to ip-10-0-94-167.us-east-2.compute.internal
    util_test.go:955: 2025-08-09 23:14:18 +0000 UTC {multus } Pod test-gateway-openshift-default AddedInterface Add eth0 [10.131.0.27/23] from ovn-kubernetes
    util_test.go:955: 2025-08-09 23:14:18 +0000 UTC {kubelet ip-10-0-94-167.us-east-2.compute.internal} Pod test-gateway-openshift-default Pulling Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest"
    util_test.go:955: 2025-08-09 23:14:19 +0000 UTC {kubelet ip-10-0-94-167.us-east-2.compute.internal} Pod test-gateway-openshift-default Pulled Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/tools:latest" in 428ms (428ms including waiting). Image size: 891467815 bytes.
    util_test.go:955: 2025-08-09 23:14:19 +0000 UTC {kubelet ip-10-0-94-167.us-east-2.compute.internal} Pod test-gateway-openshift-default Created Created container: echo
    util_test.go:955: 2025-08-09 23:14:19 +0000 UTC {kubelet ip-10-0-94-167.us-east-2.compute.internal} Pod test-gateway-openshift-default Started Started container echo
    util_test.go:955: 2025-08-09 23:14:41 +0000 UTC {kubelet ip-10-0-94-167.us-east-2.compute.internal} Pod test-gateway-openshift-default Killing Stopping container echo

PR to better tolerate pod deletions/evictions: #1262.

/test e2e-aws-operator

alebedev87 (Contributor)

/test e2e-aws-operator

alebedev87 (Contributor)

Same root cause as described here, same fix.

/test e2e-aws-operator

openshift-ci-robot (Contributor)

/retest-required

Remaining retests: 0 against base HEAD 0c57689 and 2 for PR HEAD c236d0c in total

Miciah (Contributor, Author) commented Aug 11, 2025

e2e-aws-operator failed because TestGatewayAPI/testGatewayAPIResourcesProtection/Pod_binding_required and TestConnectTimeout failed.

This is the second time I have observed TestGatewayAPI/testGatewayAPIResourcesProtection/Pod_binding_required fail, so I filed OCPBUGS-60302 to track the issue.

The issue with TestConnectTimeout is already tracked by OCPBUGS-59249.

/test e2e-aws-operator

alebedev87 (Contributor) commented Aug 11, 2025

e2e-aws-operator failed because TestGatewayAPI/testGatewayAPIResourcesProtection/Pod_binding_required

Right, this is similar to this failure, but this time the VAP didn't come up in time (though the CVO managed to come up). I'm going to mitigate both of these flakes in this PR's commit.

Miciah (Contributor, Author) commented Aug 11, 2025

e2e-aws-operator failed because TestGatewayAPI/testGatewayAPIResourcesProtection/Pod_binding_required

Right, this is similar to this failure, but this time the VAP didn't come up in time (though the CVO managed to come up). I'm going to mitigate both of these flakes in this PR's commit.

In the case of the Pod_binding_required failure, the test failed to connect to the API server:

=== RUN   TestAll/serial/TestGatewayAPI/testGatewayAPIResourcesProtection/Pod_binding_required
    gateway_api_test.go:401: failed to verify VAP protection for creating gateway API CRD "gateways.gateway.networking.k8s.io": unexpected error received while creating CRD "gateways.gateway.networking.k8s.io": Post "https://api.ci-op-f2l3wlbj-43abb.origin-ci-int-aws.dev.rhcloud.com:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions": read tcp 10.128.220.16:47032->3.143.193.91:6443: read: connection reset by peer

This failure comes from this code:

if err := wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 30*time.Second, false, func(ctx context.Context) (bool, error) {
	if err := tc.kclient.Create(ctx, testCRDs[i]); err != nil {
		if kerrors.IsAlreadyExists(err) {
			// VAP was disabled and re-enabled at the beginning of the test.
			// It may take some time for the API server to process this change and register the VAP.
			// As a result, we might encounter a "CRD X already exists" error.
			// To handle this, we allow the API server some time to catch up.
			t.Logf("Failed to create CRD %q: %v; retrying...", testCRDs[i].Name, err)
			return false, nil
		}
		if !strings.Contains(err.Error(), tc.expectedErrMsg) {
			return false, fmt.Errorf("unexpected error received while creating CRD %q: %v", testCRDs[i].Name, err)
		}
		return true, nil
	}
	return false, fmt.Errorf("admission error is expected while creating CRD %q but not received", testCRDs[i].Name)
}); err != nil {
	t.Errorf("failed to verify VAP protection for creating gateway API CRD %q: %v", testCRDs[i].Name, err)
}

That doesn't seem like an issue with the VAP or with the code that 7bb7c34 modifies; rather, it seems like a typical API blip that can cause flakiness when the test immediately fails on an API call rather than retrying.

alebedev87 (Contributor)

That doesn't seem like an issue with the VAP or with the code that 7bb7c34 modifies; rather, it seems like a typical API blip that can cause flakiness when the test immediately fails on an API call rather than retrying.

Right, this type of problem is out of our reach. The timeout increase can still be helpful, though; the 1 minute we're using right now may be too close to the borderline.

alebedev87 (Contributor)

That doesn't seem like an issue with the VAP or with the code that 7bb7c34 modifies; rather, it seems like a typical API blip that can cause flakiness when the test immediately fails on an API call rather than retrying.

Increased this timeout too (5647a21).

Miciah (Contributor, Author) commented Aug 11, 2025

That doesn't seem like an issue with the VAP or with the code that 7bb7c34 modifies; rather, it seems like a typical API blip that can cause flakiness when the test immediately fails on an API call rather than retrying.

Increased this timeout too (5647a21).

The issue isn't the timeout; it's that the test immediately signals an error if the client gets a connection failure:

if !strings.Contains(err.Error(), tc.expectedErrMsg) {
	return false, fmt.Errorf("unexpected error received while creating CRD %q: %v", testCRDs[i].Name, err)
}

alebedev87 (Contributor)

The issue isn't the timeout; it's that the test immediately signals an error if the client gets a connection failure

Right, the idea was to stop the polling loop if we got an error different from the one coming from the VAP. Retrying on a different error message would be kinda like saying "the VAP is not working, but it will eventually give you an admission error". However, there can be connectivity errors like the one we see here; let me think about how to approach this.
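
One possible middle ground, sketched below: classify network-level failures as retryable while still aborting on unexpected admission errors. isTransient is a hypothetical helper for illustration, not code from this PR:

package e2e

import (
	"errors"
	"net"
	"syscall"
)

// isTransient reports whether an error from the API client looks like a
// network blip (timeout, connection reset, connection refused) rather than
// a genuine admission response. Hypothetical helper, not code from the PR.
func isTransient(err error) bool {
	var netErr net.Error
	return errors.As(err, &netErr) ||
		errors.Is(err, syscall.ECONNRESET) ||
		errors.Is(err, syscall.ECONNREFUSED)
}

// In the polling loop quoted above, a transient error would then keep the
// poll going instead of aborting:
//
//	if isTransient(err) {
//		t.Logf("transient API error creating CRD %q: %v; retrying...", testCRDs[i].Name, err)
//		return false, nil
//	}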

openshift-ci bot commented Aug 11, 2025

@Miciah: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                        Commit   Details  Required  Rerun command
ci/prow/e2e-aws-ovn-techpreview  c236d0c  link     false     /test e2e-aws-ovn-techpreview



Miciah (Contributor, Author) commented Aug 11, 2025

/override ci/prow/e2e-aws-operator

openshift-ci bot commented Aug 11, 2025

@Miciah: Overrode contexts on behalf of Miciah: ci/prow/e2e-aws-operator

In response to this:

/override ci/prow/e2e-aws-operator


openshift-merge-bot merged commit 211a843 into openshift:master on Aug 11, 2025
21 of 22 checks passed
@openshift-cherrypick-robot

@alebedev87: new pull request created: #1264

In response to this:

/cherry-pick release-4.19


openshift-bot (Contributor)

[ART PR BUILD NOTIFIER]

Distgit: ose-cluster-ingress-operator
This PR has been included in build ose-cluster-ingress-operator-container-v4.20.0-202508111916.p0.g211a843.assembly.stream.el9.
All builds following this will include this PR.
