
Conversation

@djoshy
Contributor

@djoshy djoshy commented Oct 16, 2025

- What I did

  • MCS applies a new annotation (machineconfiguration.openshift.io/firstPivotConfig) that stores the very first config that was served to the node.
  • MCC checks this annotation, sees if the MC served first to the node in question was for a custom pool, and applies the custom pool label to prevent the node from getting queued for an update back to the worker pool. After this, it also applies another annotation (machineconfiguration.openshift.io/customPoolLabelsApplied) for bookkeeping, so it doesn't attempt to label the node again if the label was removed by some other actor. (A rough sketch of this controller-side check follows this list.)
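As a rough, hypothetical sketch of that controller-side decision (not the PR's actual code: customPoolLabelsToApply and poolForConfig are made-up stand-ins for the MCC's real lister lookups, while the annotation keys are the ones named above):

package main

import (
    "fmt"
    "strings"
)

const (
    firstPivotAnnotation    = "machineconfiguration.openshift.io/firstPivotConfig"
    labelsAppliedAnnotation = "machineconfiguration.openshift.io/customPoolLabelsApplied"
)

// customPoolLabelsToApply decides whether the node's first-served MachineConfig
// belonged to a custom (non-worker, non-master) pool and, if so, returns that
// pool's node selector labels. poolForConfig stands in for the real controller's
// lookup from a rendered MC name to its owning pool.
func customPoolLabelsToApply(nodeAnnotations map[string]string, poolForConfig func(string) string, poolNodeSelector map[string]string) (map[string]string, bool) {
    if _, done := nodeAnnotations[labelsAppliedAnnotation]; done {
        // Bookkeeping annotation is present: labels were applied once already,
        // so don't re-apply them if some other actor later removed them.
        return nil, false
    }
    firstConfig, ok := nodeAnnotations[firstPivotAnnotation]
    if !ok {
        return nil, false // node predates this feature or was never served a config
    }
    pool := poolForConfig(firstConfig)
    if pool == "" || pool == "worker" || pool == "master" {
        return nil, false // only custom pools need extra node selector labels
    }
    return poolNodeSelector, true
}

func main() {
    annos := map[string]string{firstPivotAnnotation: "rendered-infra-0123abcd"}
    lookup := func(mc string) string {
        // Toy lookup: rendered config names look like "rendered-<pool>-<hash>".
        if parts := strings.Split(mc, "-"); len(parts) >= 3 {
            return parts[1]
        }
        return ""
    }
    labels, ok := customPoolLabelsToApply(annos, lookup, map[string]string{"node-role.kubernetes.io/infra": ""})
    fmt.Println(ok, labels) // true map[node-role.kubernetes.io/infra:]
}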

- How to verify it

  • Create a cluster with this PR.
  • Create a custom MCP named infra:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
  • Make a copy of one of your machinesets to disk. Make the following changes to it:
    - Edit its name to be unique.
    - Change the MachineSet's userDataSecret field to infra-user-data-managed if you used the infra MCP I provided in the previous step. If you used another MCP name, use $MCP_NAME-user-data-managed instead. (An illustrative fragment follows these steps.)
    - I also changed the cluster-api-machineset labels to match, but I am unsure if it is needed.
  • Apply your edited machineset to the cluster. You should see a new node scale up and join the cluster. It will appear to join the worker pool at first, but then get moved to the infra pool. No update/reboot should take place in this transition.
  • Observe the MCC logs:
I1121 17:53:36.778536       1 node_controller.go:1520] Node infra was booted into custom pool ci-ln-v5hp8gk-72292-lrqcw-infra-slr88; applying node selector labels: map[node-role.kubernetes.io/infra:]
I1121 17:53:36.806336       1 node_controller.go:1534] Successfully applied custom pool labels to node ci-ln-v5hp8gk-72292-lrqcw-infra-slr88
I1121 17:53:36.806493       1 node_controller.go:1451] Node ci-ln-v5hp8gk-72292-lrqcw-infra-slr88 was booted on custom pool infra; dropping from candidate list
  • You have booted a new node directly into your custom pool! 🎉
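For reference, a minimal, illustrative machineset fragment showing only the fields touched in the steps above; this is a sketch rather than a complete manifest, the name is a placeholder, and the exact providerSpec layout varies by platform:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <cluster-id>-infra-<zone>   # placeholder; pick a unique name
  namespace: openshift-machine-api
spec:
  template:
    spec:
      providerSpec:
        value:
          userDataSecret:
            name: infra-user-data-managed   # or $MCP_NAME-user-data-managed for other pool names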

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 16, 2025
@openshift-ci
Contributor

openshift-ci bot commented Oct 16, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 16, 2025
Contributor

@yuqi-zhang yuqi-zhang left a comment

The general workflow sounds fine to me; it definitely aligns with our general discussions around how to make this work.

The only real "functionality" we're adding is to have the node request additional labels via annotations, which I feel should be safe, since if the CSR went through to make it a node, we should already trust it.

I guess technically the workflow today probably sees users adding labels via machine/machineset objects, so at worst we'd duplicate that part? Either way, it shouldn't cause an error.

nodeAnnotations := map[string]string{
    daemonconsts.CurrentMachineConfigAnnotationKey:    conf,
    daemonconsts.DesiredMachineConfigAnnotationKey:    conf,
    daemonconsts.FirstPivotMachineConfigAnnotationKey: conf,
Contributor

Oh, for some reason I forgot we did this, neat

@jlhuilier-1a

jlhuilier-1a commented Oct 20, 2025

Hello, just for information: we started to use the managed user-data in our machineset definitions (to get the correct final config at startup). It works, but we quickly ran into issues with clusters that were created with previous versions of OpenShift.
The issue is that today the VM image is created only at install time, with the corresponding unmanaged worker-user-data and master-user-data. Managed user-data, on the other hand, evolves with the OCP version, so you end up with an Ignition version unknown to the VM image (hence the machine stays stuck in the starting state).

@djoshy
Contributor Author

djoshy commented Oct 20, 2025

Hi @jlhuilier-1a! You're correct; the reason you're running into that is that your boot images are likely out of date. The *-user-data stub needs to be compatible with the boot image referenced by your machineset. Some more context can be found here. The boot image update mechanism described in this document also attempts to upgrade these secrets to the newest version for the same reason.

As for this PR, the instructions described were just the easiest path to test 😄 You could also copy an existing user-data secret (say, call it infra-user-data) and then edit the MCS endpoint within the stub to target the infra pool. Then, edit the machineset to reference that instead; this should work in a similar fashion. When we do eventually bring this feature to GA, we will be sure to make a note of this in the workflow - thanks for bringing this up, appreciate your review!
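As a rough sketch of that alternative flow (commands and names assumed, not verified here): the stub secret's userData payload is a small Ignition config whose merge source points at the MCS endpoint for a pool, so re-pointing that URL is the only edit needed.
# Inspect the existing worker stub (secret and key names follow the usual convention):
oc -n openshift-machine-api get secret worker-user-data -o jsonpath='{.data.userData}' | base64 -d
# The decoded stub contains an MCS merge source roughly like:
#   "source": "https://api-int.<cluster-domain>:22623/config/worker"
# Save a copy of the secret as infra-user-data with the trailing pool changed to /config/infra,
# then reference infra-user-data from the new machineset's userDataSecret field.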

@djoshy djoshy force-pushed the custom-pool-booting branch 4 times, most recently from 7977608 to 5939c02 Compare November 21, 2025 16:00
@djoshy djoshy changed the title WIP: PoC Custom pool booting NO-ISSUE: Implement custom pool booting Nov 21, 2025
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Nov 21, 2025
@openshift-ci-robot
Contributor

@djoshy: This pull request explicitly references no jira issue.


In response to this:

Disclaimer: I mainly just wrote this for fun, to see if it was easy to do 😄 I might be missing certain use cases, but just wanted to open it up for wider review.

- What I did

  • MCS applies a new annotation (machineconfiguration.openshift.io/firstPivotConfig) that stores the very first config that was served to the node.
  • MCC checks this annotation, sees if the MC served first to the node in question was for a custom pool, and applies the custom pool label to prevent the node from getting queued for an update back to the worker pool. After this, it also applies another annotation (machineconfiguration.openshift.io/customPoolLabelsApplied) for bookkeeping, so it doesn't attempt to label the node again if the label was removed by some other actor.

- How to verify it

  • Create a cluster with this PR.
  • Create a custom MCP named infra:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
 name: infra
spec:
 machineConfigSelector:
   matchExpressions:
     - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}
 nodeSelector:
   matchLabels:
     node-role.kubernetes.io/infra: ""
  • Make a copy of one of your machinesets to disk. Make the following changes to it:
    - Edit its name to be unique.
    - Change the MachineSet's userDataSecret field to infra-user-data-managed if you used the infra MCP I provided in the previous step. If you used another MCP name, use $MCP_NAME-user-data-managed instead.
    - I also changed the cluster-api-machineset labels to match, but I am unsure if it is needed.
  • Apply your edited machineset to the cluster. You should see a new node scale up and join the cluster. It will appear to join the worker pool at first, but then get moved to the infra pool. No update/reboot should take place in this transition.
  • Observe the MCC logs:
I1016 19:26:21.404751       1 node_controller.go:677] Pool worker[zone=us-east4-c]: node djoshy-dev-101-nwpzl-worker-infra-kthf5: changed annotation machineconfiguration.openshift.io/reason = 
I1016 19:26:40.081200       1 node_controller.go:677] Pool worker[zone=us-east4-c]: node djoshy-dev-101-nwpzl-worker-infra-kthf5: Reporting ready
I1016 19:26:40.131083       1 node_controller.go:677] Pool worker[zone=us-east4-c]: node djoshy-dev-101-nwpzl-worker-infra-kthf5: changed taints
I1016 19:26:41.565210       1 node_controller.go:677] Pool worker[zone=us-east4-c]: node djoshy-dev-101-nwpzl-worker-infra-kthf5: changed taints
I1016 19:26:45.082788       1 node_controller.go:1348] Pool worker: selected candidate node djoshy-dev-101-nwpzl-worker-infra-kthf5
I1016 19:26:45.082843       1 node_controller.go:667] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1
I1016 19:26:45.082852       1 node_controller.go:1486] Applying node selector labels from custom pool infra to node djoshy-dev-101-nwpzl-worker-infra-kthf5: map[node-role.kubernetes.io/infra:]
I1016 19:26:45.096179       1 node_controller.go:1500] Successfully applied custom pool labels to node djoshy-dev-101-nwpzl-worker-infra-kthf5
I1016 19:26:45.096220       1 node_controller.go:1417] node djoshy-dev-101-nwpzl-worker-infra-kthf5 has been moved to pool infra, dropping from candidate list
  • Profit!!! 💰 You have booted a new node directly into your custom pool! 🎉

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@djoshy djoshy changed the title NO-ISSUE: Implement custom pool booting MCO-650: Implement custom pool booting Nov 24, 2025
@openshift-ci-robot
Contributor

openshift-ci-robot commented Nov 24, 2025

@djoshy: This pull request references MCO-650 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.


@djoshy djoshy marked this pull request as ready for review November 24, 2025 19:58
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 24, 2025
@djoshy djoshy force-pushed the custom-pool-booting branch from 5939c02 to 4892de8 Compare November 25, 2025 15:16
@djoshy
Contributor Author

djoshy commented Nov 25, 2025

/retest-required


klog.Infof("Node %s was booted into custom pool %s; applying node selector labels: %v", poolName, node.Name, labelsToApply)

// Apply the labels to the node and add annotation indicating custom pool labels were applied
Member

Question for my curiosity: is there any concern around users deleting the annotation key or changing its value? (I'd hope they wouldn't, but anything's possible.)

Contributor Author

Yeah, anything's possible - a user could potentially mess with this annotation to cause strange behavior (but nothing we can't undo). The same could apply to most of our MCO-side annotations, though. I guess I'd hope it's an informed user, since a good bit of this workflow is manual (new machineset & secret setup) 😅

Member

@isabella-janssen isabella-janssen left a comment

/lgtm

Code looks clean to me & the added unit tests look to cover the appropriate cases and are passing.

As a Jira nit, I'd imagine this will need some doc updates? If so, can you add the mco_doc_required label to track that need, please?

@openshift-ci openshift-ci bot added lgtm Indicates that a PR is ready to be merged. and removed lgtm Indicates that a PR is ready to be merged. labels Nov 26, 2025
@djoshy
Contributor Author

djoshy commented Nov 26, 2025

As a Jira nit, I'd imagine this will need some doc updates? If so, can you add the mco_doc_required label to track that need, please?

Done, thanks for the callout!

Also added an e2e to the disruptive suite in the last commit, PTAL when you have a sec. It's not merge-ready since I'll want to rebase on helpers from #5391, but just wanted to get the flow figured out.

@djoshy
Contributor Author

djoshy commented Nov 26, 2025

/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive-techpreview-1of2 periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive-techpreview-2of2

@openshift-ci
Contributor

openshift-ci bot commented Nov 26, 2025

@djoshy: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive-techpreview-1of2
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive-techpreview-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/961b9040-cb01-11f0-9b98-8f7d91bc0c87-0

o.Expect(err).NotTo(o.HaveOccurred())
})

g.AfterEach(func(ctx context.Context) {
Member

I like the idea of handling the cleanup this way & I think the comments in here are nicely written. 🤩

@isabella-janssen
Member

Also added an e2e to the disruptive suite in the last commit, PTAL when you have a sec. It's not merge-ready since I'll want to rebase on helpers from #5391, but just wanted to get the flow figured out.

Initial read-through of the tests looks good to me. Once you rebase with the helpers, I'll take a closer look.

@djoshy djoshy force-pushed the custom-pool-booting branch from ecb45d3 to 63898f5 Compare December 1, 2025 18:19
@djoshy
Contributor Author

djoshy commented Dec 1, 2025

/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive-techpreview-2of2

@openshift-ci
Contributor

openshift-ci bot commented Dec 1, 2025

@djoshy: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive-techpreview-2of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/54dcc290-cee2-11f0-8059-14dd4cd0f9c9-0

@djoshy djoshy force-pushed the custom-pool-booting branch from 63898f5 to ca839ba Compare December 1, 2025 19:46
@djoshy
Contributor Author

djoshy commented Dec 2, 2025

/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive-techpreview-2of2
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-azure-mco-disruptive-techpreview-2of2
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-vsphere-mco-disruptive-techpreview-2of2
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-gcp-mco-disruptive-techpreview-2of2

/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-azure-mco-disruptive
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-vsphere-mco-disruptive
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-gcp-mco-disruptive
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv4-mco-disruptive
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv6-mco-disruptive
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-dualstack-mco-disruptive

@openshift-ci
Contributor

openshift-ci bot commented Dec 2, 2025

@djoshy: trigger 11 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive-techpreview-2of2
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-azure-mco-disruptive-techpreview-2of2
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-vsphere-mco-disruptive-techpreview-2of2
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-gcp-mco-disruptive-techpreview-2of2
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-aws-mco-disruptive
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-azure-mco-disruptive
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-vsphere-mco-disruptive
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-gcp-mco-disruptive
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv4-mco-disruptive
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv6-mco-disruptive
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-dualstack-mco-disruptive

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/95786450-cf94-11f0-8330-730ebea93444-0

@djoshy
Contributor Author

djoshy commented Dec 3, 2025

Disruptive runs look good, except for metal jobs. It seems like there is some sort of validation that makes the userDataSecret field immutable:

{  fail [github.com/openshift/machine-config-operator/test/extended/custom_pool_booting.go:130]: Failed to patch user-data-secret in machineset
Unexpected error:
    <*util.ExitError | 0xc00011ef30>: 
    exit status 1
    {
        Cmd: "oc --namespace=e2e-test-custom-pool-booting-l7fhg --kubeconfig=/tmp/secret/kubeconfig patch machinesets.machine.openshift.io infra-custom-pool-test -p [{\"op\": \"replace\", \"path\": \"/spec/template/spec/providerSpec/value/userDataSecret/name\", \"value\": \"infra-user-data-managed\"}] -n openshift-machine-api --type=json",
        StdErr: "The request is invalid: the server rejected our request due to an error in our request",

These sorts of errors are typical of a webhook - the solution here might be to have an alternative flow for the metal platform for this specific step. I had hoped the JSON patch would've saved us some trouble by making this step platform-agnostic, but oh well, we got 90% of the way there 😄

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 4, 2025
Nodes that boot with a custom pool's rendered MachineConfig were being incorrectly moved to the worker pool after scale-up because they lacked the pool's node selector labels. This change detects such nodes via their first pivot MachineConfig and applies the appropriate labels.
@openshift-ci
Contributor

openshift-ci bot commented Dec 4, 2025

@djoshy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-aws-ovn-upgrade e8e0474 link true /test e2e-aws-ovn-upgrade
ci/prow/e2e-gcp-op-1of2 e8e0474 link true /test e2e-gcp-op-1of2
ci/prow/unit e8e0474 link true /test unit
ci/prow/images e8e0474 link true /test images
ci/prow/okd-scos-images e8e0474 link true /test okd-scos-images
ci/prow/verify e8e0474 link true /test verify
ci/prow/verify-deps e8e0474 link true /test verify-deps
ci/prow/security e8e0474 link false /test security
ci/prow/e2e-hypershift e8e0474 link true /test e2e-hypershift
ci/prow/e2e-gcp-op-single-node e8e0474 link true /test e2e-gcp-op-single-node
ci/prow/e2e-aws-ovn e8e0474 link true /test e2e-aws-ovn
ci/prow/bootstrap-unit e8e0474 link false /test bootstrap-unit
ci/prow/e2e-gcp-op-2of2 e8e0474 link true /test e2e-gcp-op-2of2

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@djoshy djoshy force-pushed the custom-pool-booting branch from e8e0474 to 2bfe5d7 Compare December 4, 2025 15:34
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 4, 2025
@djoshy
Contributor Author

djoshy commented Dec 4, 2025

/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv4-mco-disruptive

@openshift-ci
Contributor

openshift-ci bot commented Dec 4, 2025

@djoshy: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv4-mco-disruptive

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/ff547460-d127-11f0-8ec0-171c403bfd7e-0

@djoshy djoshy force-pushed the custom-pool-booting branch from 2bfe5d7 to 2030623 Compare December 5, 2025 15:53
@djoshy
Contributor Author

djoshy commented Dec 5, 2025

/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv4-mco-disruptive
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-gcp-mco-disruptive

Added a skip for the metal cases, since scaling them isn't currently automated in CI

@openshift-ci
Contributor

openshift-ci bot commented Dec 5, 2025

@djoshy: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv4-mco-disruptive
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-gcp-mco-disruptive

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/bf9020b0-d1f2-11f0-9b83-bf8c8d4b2b07-0

@djoshy
Contributor Author

djoshy commented Dec 8, 2025

/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv4-mco-disruptive
/payload-job periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-gcp-mco-disruptive

PIS tests don't look happy but I don't think I touched any of their test code in the last commit 🤔

@openshift-ci
Contributor

openshift-ci bot commented Dec 8, 2025

@djoshy: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-metal-ipi-ovn-ipv4-mco-disruptive
  • periodic-ci-openshift-machine-config-operator-release-4.21-periodics-e2e-gcp-mco-disruptive

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c55d78a0-d443-11f0-8410-e434b5567f3d-0

Member

@isabella-janssen isabella-janssen left a comment

/lgtm

All previous conversations were addressed and the new test looks to be passing.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Dec 15, 2025
@openshift-ci
Contributor

openshift-ci bot commented Dec 15, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: djoshy, isabella-janssen, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [djoshy,isabella-janssen,yuqi-zhang]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
