Add a test to verify that KubeVirt/CNV can be installed on OKD #4269
Conversation
|
Hi @rmohr. Thanks for your PR. I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
/cc @stevekuznetsov @trown @eparis @danielBelenky @gbenhaim Here is a very naive approach to how the CNV/KubeVirt deployment can be tested. It just installs our meta-operator and checks if the kubevirt operator adds a ready condition on its CR. This would already have caught at least one issue which we had. I guess … |
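For illustration, a minimal sketch of what such a check could look like as a ci-operator test entry. The manifest path, namespace, CR name, and condition are assumptions for illustration, not the actual HCO deliverables:

```yaml
# Hypothetical ci-operator test entry; resource names and the condition are placeholders.
tests:
- as: e2e-aws-cnv
  commands: |
    # Deploy the meta-operator (HCO) manifests into an assumed namespace.
    oc create namespace kubevirt-hyperconverged
    oc apply -n kubevirt-hyperconverged -f deploy/
    # Wait for the kubevirt operator to add the ready condition to its CR.
    oc wait -n kubevirt-hyperconverged kubevirt/kubevirt \
      --for=condition=Ready --timeout=30m
  openshift_installer_src:
    cluster_profile: aws
```

The `openshift_installer_src` stanza is assumed here to be the template that installs OpenShift on AWS and then runs the given commands from the src image.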
|
@rmohr: GitHub didn't allow me to request PR reviews from the following users: danielBelenky, gbenhaim. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
/hold This is adding a test on the installer repo. Why? |
|
@abhinavdahiya good question whether I put it in the right repo. What we need is a test which verifies that the latest OKD code which is about to be released (e.g. a release candidate) is compatible with CNV. It should give us a heads-up when something changes in OKD that would break CNV. Is there a better place where this should be added? |
|
Why don't you create a periodic job...? |
Hm, maybe. I have now moved it to the openshift/origin presubmits (see the sketch below). Would that make sense? What we want to see is whether API changes or internal changes in OKD break something in KubeVirt (like it happened with the strategic merge patch removal on SCCs). So I guess that would make more sense, since we are not interested in the install process as such. A periodic would work, but then the chance to discuss issues on the PRs which introduce the changes has already passed. |
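For reference, a non-blocking presubmit entry in openshift/release could be sketched roughly like this; the job name, context, and ci-operator invocation are illustrative assumptions, abbreviated from the real schema:

```yaml
# Hypothetical, abbreviated presubmit; it only runs on request and does not block merges.
presubmits:
  openshift/origin:
  - name: pull-ci-openshift-origin-master-e2e-aws-cnv
    agent: kubernetes
    always_run: false      # only runs when explicitly requested
    optional: true         # failure does not block the PR
    context: ci/prow/e2e-aws-cnv
    rerun_command: /test e2e-aws-cnv
    spec:
      containers:
      - image: registry.svc.ci.openshift.org/ci/ci-operator:latest  # assumed image path
        command: [ci-operator]
        args: [--target=e2e-aws-cnv]
```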
|
A more technical question: I would need |
|
/cc @rthallisey FYI |
|
@rmohr: GitHub didn't allow me to request PR reviews from the following users: FYI. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
Should we provide our own test image? @stevekuznetsov @trown @eparis any recommendations? |
|
Ok, there is now a TEST_IMAGE set which contains everything. Looking forward to getting some feedback. :) |
|
Ping? |
|
/hold Generally we don't add presubmits to origin until the code is proven elsewhere, since there's lots of code that gets put into openshift that you can have elsewhere. I assume you just want to verify that "openshift continues to work with the latest cnv"? If so, you want to create a release periodic (see #3707) that runs on a schedule. The build cop would then notify you if the job fails. We probably need to associate this with an email so the build cop knows who to yell at. |
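A release periodic along those lines might be sketched as follows; the job name, interval, and target are assumptions, and #3707 shows the real pattern:

```yaml
# Hypothetical periodic; the build cop watches it and pings the owners on failure.
periodics:
- name: periodic-ci-kubevirt-hco-e2e-aws-cnv
  interval: 24h            # run once a day against the latest payload
  agent: kubernetes
  spec:
    containers:
    - image: registry.svc.ci.openshift.org/ci/ci-operator:latest  # assumed image path
      command: [ci-operator]
      args: [--target=e2e-aws-cnv]
```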
@smarterclayton it's |
|
was something added elsewhere that is building this image?
|
oh i see, this is being hosted on dockerhub. i think we should do this the "right" way and introduce logic that will build the hco-tests image from a repo on each job run, the way our mainline e2e tests work.
Otherwise you're going to have a hard time synchronizing delivering test+code changes, or knowing why a job failed when you can't confirm what version of the tests it ran against.
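In ci-operator terms, building the test image from the repo at the job's commit would be an `images` stanza along these lines; the Dockerfile path and image name are hypothetical:

```yaml
# Hypothetical: rebuild hco-tests from the checked-out source on every job run,
# so the tests always match the code revision under test.
images:
- dockerfile_path: hack/ci/Dockerfile.tests
  from: src
  to: hco-tests
```

That keeps test and code changes in lockstep and makes it unambiguous which test revision a failed job ran against.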
|
I don't want to stick with the :latest tag; I need to improve that on the hco side. The issue, in my opinion, is that there are no releases right now in the hco repo. I am not sure we can properly solve this scenario with one job. I think we would need two: one which tests the latest master (maybe like you suggest) and one which tests the latest release. I would not focus on this image too much for the initial version, if that is fine with you.
|
I think the request is to make it a periodic. I think Jessica laid out a reasonable approach to layered product testing:
i'd like to understand why we wouldn't allow teams to choose to run these tests against their PRs. |
|
I think there are two ways for layered products to integrate: 1) per-PR presubmit testing, and 2) periodic or release-gating e2e jobs.
I understood Jessica's answer to be regarding 2) and this PR is for 1).
|
My email was actually about both 1 and 2. You can have e2e testing that runs in a periodic job, or that blocks the releases, no different than upgrades. We'd like to understand the requirement for per PR testing. Ben will be reaching out. |
|
@bparees, as suggested in #4563 (review), RELEASE_IMAGE_TAG is now blank again. |
yes that was your mistake for listening to me earlier. sorry. |
|
Looks good so far. The rehearse job got past the point where it failed before. |
|
@bparees I tried the 4.2 and the 4.1 image. In both cases the rehearsal job times out after four hours. |
|
Here is the relevant section from https://storage.googleapis.com/origin-ci-test/logs/rehearse-4269-canary-release-openshift-origin-installer-e2e-aws-4.2-cnv/8/build-log.txt: |
|
@bparees thanks a lot for your help! Everything is passing now. |
|
/ok-to-test |
|
@rmohr can you squash the commits and i'll lgtm it? |
Install the CNV hyperconverged-cluster-operator and run some tests.
|
/lgtm |
|
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: bparees, rmohr. The full list of commands accepted by this bot can be found here. The pull request process is described here. Needs approval from an approver in each of these files: Approvers can indicate their approval by writing /approve in a comment. |
|
/hold cancel |
|
@rmohr: Updated the … Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
@rmohr: The following test failed, say /retest to rerun it:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. |
After openshift is installed on AWS, install the HCO operator from the
kubevirt project and wait on the CR of the kubevirt operator for the
ready condition to occur.
First version, which will not yet run many tests. It only verifies that HCO can be successfully installed. It uses a promoted image from presubmits plus the src image to deploy HCO and run the basic sanity checks.
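Pieced together, the wiring described above might look like this in the ci-operator config; the image coordinates and the make targets are assumptions:

```yaml
# Hypothetical: consume the image promoted by the HCO presubmits, then run the
# sanity checks from the src image after the AWS install finishes.
base_images:
  hco-tests:
    namespace: kubevirt                      # assumed promotion namespace
    name: hyperconverged-cluster-operator    # assumed image stream
    tag: hco-tests
tests:
- as: e2e-aws-cnv
  commands: make deploy-hco functest         # assumed targets in the src image
  openshift_installer_src:
    cluster_profile: aws
```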