Auto sync upstream 2022-08-04-07-47 #35

Closed

nunnatsa wants to merge 64 commits into openshift:main from nunnatsa:auto_sync_upstream_2022-08-04-07-47

Conversation

@nunnatsa

@nunnatsa nunnatsa commented Aug 4, 2022

What this PR does / why we need it:

Which issue(s) this PR fixes (optional, in fixes #<issue_number>(, fixes #<issue_number>, ...) format; will close the issue(s) when the PR is merged): fixes #

Special notes for your reviewer:

Release notes:

Auto sync upstream 2022-08-04

Isaac Dorfman and others added 30 commits April 27, 2022 15:26
…r-integration

Added integration with the image-builder kubevirt image
In order to debug e2e test failures, add a failure printout to the RunCmd
function, so we can understand what went wrong when a failure occurs.

Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
Add failure printout to the e2e test RunCmd function
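The idea behind this commit can be sketched as follows. This is a hypothetical stand-alone version of a RunCmd-style helper (not the PR's actual code): on failure, it prints the captured stdout and stderr instead of returning a bare error, so e2e logs show what actually went wrong.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// RunCmd runs a command, capturing its output. On failure it prints the
// captured stdout/stderr so the cause of an e2e failure is visible in logs.
func RunCmd(name string, args ...string) (string, error) {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	if err != nil {
		// Failure printout: surface the command's output with the error.
		fmt.Printf("command %q failed: %v\nstdout:\n%s\nstderr:\n%s\n",
			name, err, stdout.String(), stderr.String())
	}
	return stdout.String(), err
}

func main() {
	out, err := RunCmd("sh", "-c", "echo ok")
	fmt.Printf("out=%q err=%v\n", out, err)
}
```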
…r-docs

added docs for creating the image-builder image
Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
…access

kubevirtci: Add option to interact with the tenant cluster and improve usage (help)
Signed-off-by: David Vossel <davidvossel@gmail.com>
Signed-off-by: David Vossel <davidvossel@gmail.com>
Signed-off-by: David Vossel <davidvossel@gmail.com>
Signed-off-by: David Vossel <davidvossel@gmail.com>
Add the ability to access the tenant cluster during the test
Currently it only checks that the Nodes exist; more functionality needs to
be added (for example, adding new operators/workloads using the network and
storage). In order to have this ability, the following was added:
1. Using the virtctl client for KubeVirt resources
2. Performing virtctl port-forward and modifying the tenant cluster
   kubeconfig
Signed-off-by: David Vossel <davidvossel@gmail.com>
Signed-off-by: David Vossel <davidvossel@gmail.com>
Signed-off-by: David Vossel <davidvossel@gmail.com>
Signed-off-by: David Vossel <davidvossel@gmail.com>
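The port-forward approach described above amounts to rewriting the API server address in the tenant cluster's kubeconfig so it points at the locally forwarded port. A minimal sketch, assuming a hypothetical helper (this is not the PR's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// rewriteServer takes the API server URL from a tenant cluster kubeconfig
// and points it at a local port opened by `virtctl port-forward`, so test
// code running outside the cluster can reach the tenant API server.
func rewriteServer(server string, localPort int) (string, error) {
	u, err := url.Parse(server)
	if err != nil {
		return "", err
	}
	// Replace the in-cluster host with the local forwarded endpoint.
	u.Host = fmt.Sprintf("127.0.0.1:%d", localPort)
	return u.String(), nil
}

func main() {
	s, err := rewriteServer("https://tenant-cluster:6443", 16443)
	fmt.Println(s, err)
}
```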
Add tenant cluster verification to create-cluster test
Signed-off-by: David Vossel <davidvossel@gmail.com>
The domain and cidr ranges of the tenant cluster need to be offset
from the default kubeadm settings or else they will likely conflict
and overlap with the infra network's settings

Signed-off-by: David Vossel <davidvossel@gmail.com>
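The conflict the commit above avoids is easy to check mechanically: two CIDR ranges overlap when either contains the other's base address. An illustrative sketch (the CIDR values are examples, not the PR's chosen ranges):

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR ranges intersect. This is why the
// tenant cluster's pod/service CIDRs must be offset from the kubeadm
// defaults used by the infra network.
func cidrsOverlap(a, b string) bool {
	_, na, err1 := net.ParseCIDR(a)
	_, nb, err2 := net.ParseCIDR(b)
	if err1 != nil || err2 != nil {
		return false
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
	// kubeadm's default service CIDR vs an identical tenant setting: conflict.
	fmt.Println(cidrsOverlap("10.96.0.0/12", "10.96.0.0/12"))
	// An offset tenant range avoids the overlap.
	fmt.Println(cidrsOverlap("10.96.0.0/12", "10.128.0.0/14"))
}
```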
Signed-off-by: David Vossel <davidvossel@gmail.com>
The test occasionally failed because the Update() vmi call
returned an error: the vmi would mutate between the Get and Update calls.
To fix this, we use a Patch instead.

Signed-off-by: David Vossel <davidvossel@gmail.com>
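The reason a Patch avoids the conflict: Update submits the whole object with its resourceVersion and is rejected if the object changed in between, while a merge patch carries only the fields being changed. A sketch of building such a patch body (the field name is illustrative, not taken from the PR):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Instead of Get + mutate + Update (which fails with a conflict if the VMI
// mutated in between), build a JSON merge patch containing only the fields
// we want to change and send it with a Patch call.
func evictionPatch(strategy string) ([]byte, error) {
	patch := map[string]interface{}{
		"spec": map[string]interface{}{
			"evictionStrategy": strategy,
		},
	}
	return json.Marshal(patch)
}

func main() {
	p, err := evictionPatch("External")
	fmt.Println(string(p), err)
}
```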
pjaton and others added 14 commits July 28, 2022 06:05
Add the capk user properly when users are already defined.
Also, add a unit test for the infracluster package.
Copy Kubeadm userdata secret labels to CAPK secret.
…ookup

Fix infra secret lookup so that it defaults to the namespace of the referencing resource
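The lookup fix above boils down to a defaulting rule: use the namespace from the secret reference when one is set, otherwise fall back to the namespace of the resource holding the reference. A hypothetical illustration (not the PR's actual helper):

```go
package main

import "fmt"

// secretNamespace picks the namespace for an infra secret lookup: the
// namespace named in the secret reference wins; if it is empty, default to
// the namespace of the referencing resource rather than a fixed one.
func secretNamespace(refNamespace, referencingNamespace string) string {
	if refNamespace != "" {
		return refNamespace
	}
	return referencingNamespace
}

func main() {
	fmt.Println(secretNamespace("", "tenant-ns"))
	fmt.Println(secretNamespace("explicit-ns", "tenant-ns"))
}
```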
When a host node is deleted, the guest VMIs are evicted, killing the
running tenant node in each VMI.

This PR gracefully deletes the VMI by first draining the guest node. Only
if the drain is successful will CAPK delete the VMI.

The PR is based on a feature in KubeVirt, where by setting the eviction
strategy to "external" KubeVirt will not delete the VMI, but will set
the `vmi.Status.EvacuationNodeName` field to signal to the external
controller (CAPK in this case) that the VMI should be evicted.

Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
Also, add unit tests. The logic to get the guest cluster client needed to
be changed to use the workloadClient type, because the previous code was
not testable.

Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
If the drain keeps failing for more than 10 minutes, delete the VMI anyway.

Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
Graceful deletion of a VirtualMachineInstance
…to_sync_upstream_2022-08-04-07-47

Signed-off-by: Nahshon Unna-Tsameret <nunnatsa@redhat.com>
@openshift-ci openshift-ci bot requested review from davidvossel and nirarg August 4, 2022 07:51
@openshift-ci

openshift-ci bot commented Aug 4, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nunnatsa

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 4, 2022
@davidvossel

/retest

@openshift-ci

openshift-ci bot commented Aug 9, 2022

@nunnatsa: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/cluster-api-provider-kubevirt-e2e b0ddcf1 link true /test cluster-api-provider-kubevirt-e2e

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@davidvossel

CI is failing:

+ sleep 60
+ oc wait pod --for=condition=Ready -l cluster.x-k8s.io/provider=infrastructure-kubevirt -n capk-system --timeout=120s
error: no matching resources found

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 10, 2022
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 10, 2022
@nunnatsa
Author

nunnatsa commented Jan 8, 2023

too old
/close

@openshift-ci openshift-ci bot closed this Jan 8, 2023
@openshift-ci

openshift-ci bot commented Jan 8, 2023

@nunnatsa: Closed this PR.

In response to this:

too old
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@nunnatsa nunnatsa deleted the auto_sync_upstream_2022-08-04-07-47 branch January 8, 2023 20:07