Conversation

@trozet commented Jan 9, 2025

πŸ“‘ Description

Fixes #

Additional Information for reviewers

βœ… Checks

  • My code requires changes to the documentation
  • if so, I have updated the documentation as required
  • My code requires tests
  • if so, I have added and/or updated the tests as required
  • All the tests have passed in the CI

How to verify it

npinaeva and others added 30 commits December 18, 2024 20:23
Handle host-network pods as default network.
Don't return per-pod errors on startup.
Remove nadController from UDNHostIsolationManager as we no longer use it
to find a pod's UDN based on the NADs that exist in the namespace.

Signed-off-by: Nadia Pinaeva <n.m.pinaeva@gmail.com>
Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
…face

Signed-off-by: Martin Kennelly <mkennell@redhat.com>
This code isn't being used anymore. We don't expect users
to upgrade directly from code which contained the legacy LRPs,
therefore it's safe to remove.

Signed-off-by: Martin Kennelly <mkennell@redhat.com>
Signed-off-by: Martin Kennelly <mkennell@redhat.com>
L2 UDN: EgressIP hosted by primary interface (`breth0`)
If EncapIP is configured, it means it differs from the node's
primary address. Do not update EncapIP when the node's primary address
changes.

Signed-off-by: Yun Zhou <yunz@nvidia.com>
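
A minimal sketch of the guard this describes, assuming hypothetical field names (the real ovn-kubernetes state layout differs):

```go
package sketch

// nodeState is a hypothetical stand-in for per-node gateway state.
type nodeState struct {
	PrimaryIP       string
	EncapIP         string
	EncapConfigured bool // true when EncapIP was set explicitly
}

// onPrimaryAddressChange applies the rule above: an explicitly
// configured EncapIP differs from the primary address by definition,
// so only an unconfigured encap IP follows primary-address changes.
func onPrimaryAddressChange(n *nodeState, newIP string) {
	n.PrimaryIP = newIP
	if n.EncapConfigured {
		return // leave the explicit EncapIP untouched
	}
	n.EncapIP = newIP // default: encap follows the primary address
}
```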
Assign network ID from network manager running in cluster manager. The
network ID is included in NetInfo and annotated on the NAD along with
the network name. Network managers running in zone & node controllers
will read the network ID from the annotation to set it on NetInfo.

On startup, network manager running in cluster manager will read the
network IDs annotated on the nodes to cover for the upgrade scenario.
Network IDs will still be annotated on the nodes because this PR does
not transition all the code to use the network ID from the NetInfo
instead of the node annotation. That will have to be done progressively.

This has several benefits, among them:
- NetworkID is available sooner overall, since we don't have to wait for
  all the nodes to be annotated.
- No need to unmarshal the node annotation to get the network IDs; they
  are available in NetInfo.
- No need to unmarshal the NAD to get the network name; it can be accessed
  directly from the annotation.

If a network is replaced with a different one of the same name, the
network ID is reused. This shouldn't be a problem, as the new network
controller will not start until the previous one is stopped and cleaned
up.

Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
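
A sketch of the consumer side described above: zone/node controllers read the ID straight off the NAD annotation instead of unmarshaling node annotations. The annotation key here is an assumption, not the confirmed constant:

```go
package sketch

import (
	"fmt"
	"strconv"

	nadv1 "github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/apis/k8s.cni.cncf.io/v1"
)

// annotationNetworkID is a hypothetical key; the real name lives in
// the ovn-kubernetes types package.
const annotationNetworkID = "k8s.ovn.org/network-id"

// networkIDFromNAD reads the ID the cluster manager stamped on the
// NAD, avoiding any node-annotation unmarshaling.
func networkIDFromNAD(nad *nadv1.NetworkAttachmentDefinition) (int, error) {
	v, ok := nad.Annotations[annotationNetworkID]
	if !ok {
		return 0, fmt.Errorf("NAD %s/%s has no network ID annotation", nad.Namespace, nad.Name)
	}
	return strconv.Atoi(v)
}
```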
Instead of considering as managed those VRFs that follow the mp<id>-udn-vrf
naming template, use the table number: VRFs associated with a table
within our reserved block of table numbers are managed by us. The block
right now is anything higher than RoutingTableIDStart (1000). This
allows managing VRFs with any name, which is desirable if the name is
going to be exposed through BGP.

Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
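
The new ownership test is simple enough to state in code; a minimal sketch, assuming the routing-table ID is already known for the VRF:

```go
package sketch

// RoutingTableIDStart marks the bottom of the reserved table block,
// per the commit message (1000).
const RoutingTableIDStart = 1000

// isManagedVRF is the new test: a VRF is ours if its routing table
// falls in the reserved block, regardless of its name. (The old test
// matched the mp<id>-udn-vrf naming template instead.)
func isManagedVRF(tableID uint32) bool {
	return tableID > RoutingTableIDStart
}
```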
Anticipating that these VRF names are going to be exposed through BGP,
we should use friendlier names for our VRFs. The most natural name to
use is the network name. As a consequence, giving a cluster UDN a name
below 15 characters that matches an already existing VRF not managed by
ovn-k will fail. This is considered an admin problem and not an ovn-k
problem for now.

Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
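
A hedged sketch of the resulting admin-facing constraint, with caller-supplied probes standing in for the real netlink lookups:

```go
package sketch

import "fmt"

// maxIfaceName is the Linux interface-name limit (IFNAMSIZ - 1).
const maxIfaceName = 15

// validateVRFName illustrates the rule: network names longer than 15
// characters can't become VRF device names, and a name colliding with
// an unmanaged VRF is treated as an admin error. linkExists and
// isManaged are hypothetical probes supplied by the caller.
func validateVRFName(name string, linkExists, isManaged func(string) bool) error {
	if len(name) > maxIfaceName {
		return fmt.Errorf("network name %q exceeds %d characters", name, maxIfaceName)
	}
	if linkExists(name) && !isManaged(name) {
		return fmt.Errorf("VRF %q already exists and is not managed by ovn-k", name)
	}
	return nil
}
```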
Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
Was causing deadlocks in unit tests

Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
…heir subcontrollers

Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
Assuming that there are three types of controllers (network
agnostic, network aware, and network specific), we were already notifying
network-specific controllers of network changes. But network-aware
controllers, controllers for which we have a single instance capable of
managing multiple networks, had no code path to be informed of network
changes.

This commit adds a code path for that and makes the RouteAdvertisements
controller aware of network changes.

Changed ClusterManager to be the controller manager for cluster manager
instead of secondaryNetworkClusterManager. It just makes more sense that
way since ClusterManager is the top-level manager.

Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
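
An illustrative sketch of the three flavors; the interface names are invented for illustration, not the real ovn-kubernetes types:

```go
package sketch

// NetworkAgnosticController doesn't care about networks at all.
type NetworkAgnosticController interface {
	Start() error
	Stop()
}

// NetworkAwareController is a single instance managing many networks,
// so it must be told when the set of networks changes. This is the
// code path the commit adds (used by RouteAdvertisements).
type NetworkAwareController interface {
	NetworkAgnosticController
	ReconcileNetworks() error
}

// Network-specific controllers (one instance per network) were already
// "notified" implicitly by being started and stopped with their network.
```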
Signed-off-by: Jaime CaamaΓ±o Ruiz <jcaamano@redhat.com>
…twork exist test

Signed-off-by: Or Mergi <ormergi@redhat.com>
CUDN cleanup is inconsistent: we see some flaky tests due to CUDN
"already exists" errors, implying objects are not actually deleted.

Wait for the CUDN object to be gone when deleted.

Signed-off-by: Or Mergi <ormergi@redhat.com>
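
A minimal sketch of such a wait using the dynamic client and Gomega; the k8s.ovn.org/v1 clusteruserdefinednetworks GVR is an assumption here:

```go
package e2e

import (
	"context"
	"time"

	"github.com/onsi/gomega"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// waitForCUDNDeletion polls after the delete until the get returns
// NotFound, so a follow-up create cannot hit "already exists".
func waitForCUDNDeletion(c dynamic.Interface, name string) {
	gvr := schema.GroupVersionResource{
		Group: "k8s.ovn.org", Version: "v1", Resource: "clusteruserdefinednetworks",
	}
	gomega.Eventually(func() bool {
		_, err := c.Resource(gvr).Get(context.Background(), name, metav1.GetOptions{})
		return apierrors.IsNotFound(err)
	}, 60*time.Second, time.Second).Should(gomega.BeTrue())
}
```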
CUDN is a cluster-scoped object; when tests run in parallel,
random names avoid conflicts with other tests.

Use random metadata.name for CUDN objects.

The "isolates overlapping CIDRs" tests create objects based on the
'red' and 'blue' variables, including CUDN objects.
Change the tests' CUDN creation to use random names and update the given
'networkAttachmentConfigParams' with the newly generated name.
Update the 'red' & 'blue' variables with the generated name, carried by
'networkAttachmentConfigParams' (netConfig.name).

The pod2Egress tests assert on the CUDN object name given by 'userDefinedNetworkName'.
In practice the test's netConfigParam.name is userDefinedNetworkName.
Change the assertion to check the given netConfigParam.

Signed-off-by: Or Mergi <ormergi@redhat.com>
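
A sketch of the naming scheme, assuming a helper like this (the prefix and suffix length are arbitrary choices, not the test suite's actual constants):

```go
package e2e

import (
	"fmt"

	utilrand "k8s.io/apimachinery/pkg/util/rand"
)

// randomCUDNName builds a fixed prefix plus a random suffix so
// parallel test runs don't collide on the cluster-scoped object.
func randomCUDNName(prefix string) string {
	return fmt.Sprintf("%s-%s", prefix, utilrand.String(5))
}
```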
Signed-off-by: nithyar <nithyar@nvidia.com>
Signed-off-by: nithyar <nithyar@nvidia.com>
Reconcile RouteAdvertisements in cluster manager
Add missing enum validation for RouteAdvertisements
The NetPol test checks the assigned pod IP only against the IPv4 subnet,
which would fail on an IPv6-only cluster. This commit fixes it by
checking against all valid CIDRs.

Signed-off-by: Periyasamy Palanisamy <pepalani@redhat.com>
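
A minimal sketch of the corrected check: accept the pod IP if any configured cluster CIDR, IPv4 or IPv6, contains it (function and parameter names are illustrative):

```go
package e2e

import "net"

// podIPInClusterCIDRs reports whether podIP falls inside any of the
// given cluster CIDRs, so the assertion works on IPv4-only, IPv6-only,
// and dual-stack clusters alike.
func podIPInClusterCIDRs(podIP string, cidrs []string) bool {
	ip := net.ParseIP(podIP)
	if ip == nil {
		return false
	}
	for _, c := range cidrs {
		_, ipNet, err := net.ParseCIDR(c)
		if err != nil {
			continue // skip malformed CIDRs
		}
		if ipNet.Contains(ip) {
			return true
		}
	}
	return false
}
```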
As of today the NetworkReady condition indicates that a NAD has been created,
and not necessarily that the underlying network is ready to work with,
because that requires other internal components to act (e.g. setting OVS
ports, OVN flows, etc.).

Rename the NetworkReady condition type to NetworkCreated so it better
describes what it indicates.

This change enables introducing an alternative "NetworkReady" condition that
provides an actual indication that a UDN network is ready and that other
internal components acted successfully.

Signed-off-by: Or Mergi <ormergi@redhat.com>
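
For illustration, the renamed condition could be built like this; the reason and message wording are assumptions, not the controller's actual strings:

```go
package sketch

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// newNetworkCreatedCondition shows what the renamed condition conveys:
// only that the NAD was created, not that OVS ports, OVN flows, etc.
// are in place (that is left to a future "NetworkReady" condition).
func newNetworkCreatedCondition() metav1.Condition {
	return metav1.Condition{
		Type:    "NetworkCreated", // was "NetworkReady"
		Status:  metav1.ConditionTrue,
		Reason:  "NetworkAttachmentDefinitionCreated",
		Message: "NetworkAttachmentDefinition has been created",
	}
}
```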
The variable ginkgo_focus is misspelled as gingko_focus.

As the latter variable is not used anywhere else in this repo,
and the next line concatenates ginkgo_focus into ginkgoargs,
it appears to be a typo.

Fixes: #4942

Signed-off-by: Felix Schumacher <felix.schumacher@internetallee.de>
openshift-ci bot commented Jan 15, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/ee4ef990-d38c-11ef-9f34-2a6c32dd947c-0

openshift-ci bot commented Jan 15, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 0 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

1 similar comment

openshift-ci bot commented Jan 15, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.19-e2e-gcp-ovn-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/649aa2c0-d38d-11ef-9015-cb9b631a4677-0

openshift-ci bot commented Jan 16, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.19-e2e-gcp-ovn-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/293007b0-d415-11ef-9188-98fa23df9009-0

openshift-ci bot commented Jan 16, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/34aa6a40-d415-11ef-9742-820cb7fe953e-0

openshift-ci bot commented Jan 16, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-dualstack-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/4a66d670-d415-11ef-8bee-9426b528d2aa-0

openshift-ci bot commented Jan 17, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-dualstack-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/98779f10-d524-11ef-9798-8032ab0693e0-0

openshift-ci bot commented Jan 17, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/a4cc8320-d524-11ef-8c88-0c02d03b7698-0

openshift-ci bot commented Jan 18, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-dualstack-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/75f88580-d5ab-11ef-8bd2-69c2f06893a6-0

openshift-ci bot commented Jan 18, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/7fec2100-d5ab-11ef-86ea-c9dc40168bc5-0

openshift-ci bot commented Jan 20, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-dualstack-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/02d3a360-d74e-11ef-8630-5d1180a25ebe-0

openshift-ci bot commented Jan 20, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/15711c50-d74e-11ef-8be3-2aabce8e4ae3-0

openshift-ci bot commented Jan 20, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-dualstack-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/61ed1e90-d775-11ef-959a-f9c44b8972e9-0

openshift-ci bot commented Jan 20, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/6792e3c0-d775-11ef-96e9-9d78b7a7cd83-0

openshift-ci bot commented Jan 21, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-dualstack-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/3bc17300-d794-11ef-8664-0785553427de-0

openshift-ci bot commented Jan 21, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/41f3c0c0-d794-11ef-9959-2bf9f3271c30-0

openshift-ci bot commented Jan 21, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-dualstack-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/d2ad9f20-d828-11ef-9956-66ccf99d9085-0

openshift-ci bot commented Jan 21, 2025

@trozet: This PR was included in a payload test run from openshift/origin#29417
trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/d9e21aa0-d828-11ef-81d5-9d0d46ebeb89-0

@openshift-merge-robot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Jan 26, 2025
@openshift-merge-robot

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) on Apr 27, 2025
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci bot added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Jun 10, 2025
openshift-ci bot commented Jun 13, 2025

@trozet: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-aws-ovn-upgrade-local-gateway bcf31ed link true /test e2e-aws-ovn-upgrade-local-gateway
ci/prow/e2e-metal-ipi-ovn-dualstack-techpreview bcf31ed link false /test e2e-metal-ipi-ovn-dualstack-techpreview
ci/prow/e2e-metal-ipi-ovn-techpreview bcf31ed link false /test e2e-metal-ipi-ovn-techpreview
ci/prow/security bcf31ed link false /test security
ci/prow/e2e-aws-ovn-single-node-techpreview bcf31ed link false /test e2e-aws-ovn-single-node-techpreview
ci/prow/e2e-vsphere-ovn-techpreview bcf31ed link false /test e2e-vsphere-ovn-techpreview
ci/prow/e2e-azure-ovn bcf31ed link false /test e2e-azure-ovn
ci/prow/e2e-aws-ovn-techpreview bcf31ed link false /test e2e-aws-ovn-techpreview
ci/prow/e2e-aws-ovn-hypershift-conformance-techpreview bcf31ed link false /test e2e-aws-ovn-hypershift-conformance-techpreview
ci/prow/e2e-openstack-ovn bcf31ed link false /test e2e-openstack-ovn
ci/prow/e2e-metal-ipi-ovn-dualstack-local-gateway-techpreview bcf31ed link false /test e2e-metal-ipi-ovn-dualstack-local-gateway-techpreview
ci/prow/e2e-metal-ipi-ovn-ipv6-techpreview bcf31ed link false /test e2e-metal-ipi-ovn-ipv6-techpreview
ci/prow/openshift-e2e-gcp-ovn-techpreview-upgrade bcf31ed link false /test openshift-e2e-gcp-ovn-techpreview-upgrade
ci/prow/4.19-upgrade-from-stable-4.18-e2e-gcp-ovn-rt-upgrade bcf31ed link true /test 4.19-upgrade-from-stable-4.18-e2e-gcp-ovn-rt-upgrade
ci/prow/e2e-gcp-ovn-techpreview bcf31ed link true /test e2e-gcp-ovn-techpreview
ci/prow/e2e-azure-ovn-upgrade bcf31ed link true /test e2e-azure-ovn-upgrade
ci/prow/e2e-azure-ovn-techpreview bcf31ed link false /test e2e-azure-ovn-techpreview
ci/prow/e2e-aws-ovn-edge-zones bcf31ed link true /test e2e-aws-ovn-edge-zones
ci/prow/e2e-metal-ipi-ovn-dualstack-bgp-local-gw bcf31ed link true /test e2e-metal-ipi-ovn-dualstack-bgp-local-gw
ci/prow/e2e-metal-ipi-ovn-dualstack-bgp bcf31ed link true /test e2e-metal-ipi-ovn-dualstack-bgp

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

openshift-ci bot closed this on Jul 14, 2025
openshift-ci bot commented Jul 14, 2025

@openshift-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels

  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • do-not-merge/work-in-progress: Indicates that a PR should not merge because it is a work in progress.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-rebase: Indicates a PR cannot be merged because it has merge conflicts with HEAD.
