Bug 1929917: pkg/cvo/sync_worker: Skip precreation of baremetal ClusterOperator #531
Conversation
@wking: This pull request references Bugzilla bug 1929917, which is valid. The bug has been moved to the POST state and updated to refer to this pull request via the external bug tracker. 3 validations were run on this bug. Requesting review from QA contact.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-push: 04d11da to 7731ee7
/lgtm
This is a hack fix for [1], where we have a delay on 4.6->4.7 updates, and on some 4.7 installs, between the very early ClusterOperator precreation and the operator eventually coming up to set its status conditions. In the interim, there are no conditions, which causes cluster_operator_up to be 0, which causes the critical ClusterOperatorDown to fire. We'll want a more general fix going forward; this commit is a temporary hack to avoid firing the critical ClusterOperatorDown while we build consensus around the general fix.

The downside to dropping precreates for this operator is that we lose the must-gather references when the operator fails to come up. That was what precreation was designed to address in 2a469e3 (cvo: When installing or upgrading, fast-fill cluster-operators, 2020-02-07, openshift#318). If we actually get a must-gather without the bare-metal bits and we miss them, we can revisit the approach this hack is taking.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1929917
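The shape of the hack is small: a hard-coded exclusion list consulted before the CVO fast-fills ClusterOperators. The sketch below is illustrative, not the actual sync_worker code; the type, function names, and skip list are assumptions made for the example.

```go
package main

import "fmt"

// ClusterOperator is a minimal stand-in for the OpenShift API type;
// the fields here are illustrative, not the real openshift/api schema.
type ClusterOperator struct {
	Name string
}

// precreateSkipList mirrors the hack: a hard-coded set of operators
// whose ClusterOperator should not be fast-filled. The baremetal
// operator is new in 4.7, so on 4.6->4.7 updates its operator pod can
// lag far behind the early precreation, leaving a condition-less
// ClusterOperator that trips the ClusterOperatorDown alert.
var precreateSkipList = map[string]bool{
	"baremetal": true,
}

// skipPrecreate reports whether a named ClusterOperator is excluded
// from precreation. (Hypothetical helper name.)
func skipPrecreate(name string) bool {
	return precreateSkipList[name]
}

// precreate returns the ClusterOperators the CVO would fast-fill,
// filtering out entries on the skip list.
func precreate(names []string) []ClusterOperator {
	var out []ClusterOperator
	for _, n := range names {
		if skipPrecreate(n) {
			continue
		}
		out = append(out, ClusterOperator{Name: n})
	}
	return out
}

func main() {
	for _, co := range precreate([]string{"etcd", "baremetal", "machine-config"}) {
		fmt.Println(co.Name)
	}
}
```

The cost of the filter, as the commit message notes, is losing the pre-created relatedObjects that must-gather would otherwise use when the baremetal operator fails to come up.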
Force-push: 7731ee7 to fdef37d
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jottofar, wking. The full list of commands accepted by this bot can be found here. The pull request process is described here.
/retest Please review the full test history for this PR and help us cut down flakes.
Previous update job had only unrelated failures. /override ci/prow/e2e-agnostic-upgrade
@wking: Overrode contexts on behalf of wking: ci/prow/e2e-agnostic-upgrade
MCO got stuck, unrelated to the PR. /override ci/prow/e2e-agnostic-upgrade
@wking: Overrode contexts on behalf of wking: ci/prow/e2e-agnostic-upgrade
@wking: All pull requests linked via external trackers have merged: Bugzilla bug 1929917 has been moved to the MODIFIED state.
/cherrypick release-4.7
@wking: new pull request created: #534
…fest-task node

ClusterOperator pre-creation landed in 2a469e3 (cvo: When installing or upgrading, fast-fill cluster-operators, 2020-02-07, openshift#318) to move us from:

1. CVO creates a namespace for an operator.
2. CVO creates ... for the operator.
3. CVO creates the operator Deployment.
4. Operator deployment never comes up, for whatever reason.
5. Admin must-gathers.
6. Must-gather uses ClusterOperators for discovering important stuff, and because the ClusterOperator doesn't exist yet, we get no data about why the deployment didn't come up.

to:

1. CVO pre-creates ClusterOperator for an operator.
2. CVO creates the namespace for an operator.
3. CVO creates ... for the operator.
4. CVO creates the operator Deployment.
5. Operator deployment never comes up, for whatever reason.
6. Admin must-gathers.
7. Must-gather uses ClusterOperators for discovering important stuff, finds the one the CVO had pre-created with hard-coded relatedObjects, gathers stuff from the referenced operator namespace, and allows us to troubleshoot the issue.

But when ClusterOperator pre-creation happens at the beginning of an update sync cycle, it can take a while before the CVO gets from the ClusterOperator creation in (1) to the operator managing that ClusterOperator in (4), which can lead to ClusterOperatorDown alerts [1,2]. fdef37d (pkg/cvo/sync_worker: Skip precreation of baremetal ClusterOperator, 2021-03-16, openshift#531) landed a narrow hack to avoid issues on 4.6 -> 4.7 updates, which added the baremetal operator [1]. But we're adding a cloud-controller-manager operator in 4.7 -> 4.8, and breaking the same way [2]. This commit pivots to a more generic fix by delaying the pre-creation until the CVO reaches the manifest-task node containing the ClusterOperator manifest. That will usually be the same node that has the other critical operator manifests like the namespace, RBAC, and operator deployment.

Dropping fdef37d's baremetal hack will re-expose us to issues on install, where we race through all the manifests as fast as possible. It's possible that we will now pre-create the ClusterOperator early (because it's only blocked by the CRD) and still be a ways in front of the operator pod coming up (because that needs a schedulable control-plane node). But we can address that by suppressing ClusterOperatorDown and ClusterOperatorDegraded for some portion of install in follow-up work.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1929917
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1957775
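The generic fix described above can be sketched as a pass over the payload's task-node graph: instead of pre-creating every ClusterOperator up front, pre-creation is deferred to whichever node actually carries that ClusterOperator manifest. The types and function below are simplified stand-ins for the CVO's real payload graph, not its actual API.

```go
package main

import "fmt"

// Manifest and TaskNode are simplified stand-ins for the CVO's payload
// graph types; the names and fields are assumptions for illustration.
type Manifest struct {
	Kind string
	Name string
}

type TaskNode struct {
	Manifests []Manifest
}

// precreateAt reports, per task-node index, which ClusterOperator
// names would be pre-created when the CVO reaches that node. This is
// the core of the generic approach: pre-creation waits for the node
// that carries the ClusterOperator manifest, which usually also holds
// the operator's namespace, RBAC, and Deployment.
func precreateAt(nodes []TaskNode) map[int][]string {
	out := map[int][]string{}
	for i, node := range nodes {
		for _, m := range node.Manifests {
			if m.Kind == "ClusterOperator" {
				out[i] = append(out[i], m.Name)
			}
		}
	}
	return out
}

func main() {
	nodes := []TaskNode{
		{Manifests: []Manifest{
			{Kind: "CustomResourceDefinition", Name: "clusteroperators"},
		}},
		{Manifests: []Manifest{
			{Kind: "Namespace", Name: "openshift-cloud-controller-manager"},
			{Kind: "ClusterOperator", Name: "cloud-controller-manager"},
			{Kind: "Deployment", Name: "cloud-controller-manager-operator"},
		}},
	}
	// Pre-creation now happens at node 1, alongside the operator's
	// own manifests, rather than at the start of the sync cycle.
	fmt.Println(precreateAt(nodes))
}
```

On install, as the commit message cautions, the ClusterOperator node may still run well before the operator pod can schedule, so this narrows but does not eliminate the alert window.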