

@wking
Member

@wking wking commented May 7, 2021

During install, the CVO has pushed manifests into the cluster as fast as possible without blocking on "has the in-cluster resource leveled?" since way back in b0b4902 (#136). That can lead to ClusterOperatorDown and ClusterOperatorDegraded firing during install, as the timeline in the commit message below shows.

ClusterOperatorDown is similar, but I'll leave addressing it to a separate commit. For ClusterOperatorDegraded, the degraded condition should not be particularly urgent, so we should be fine bumping it to warning and using for: 30m or something more relaxed than the current 10m.

…usterOperatorDegraded

During install, the CVO has pushed manifests into the cluster as fast
as possible without blocking on "has the in-cluster resource leveled?"
since way back in b0b4902 (clusteroperator: Don't block on failing
during initialization, 2019-03-11, openshift#136).  That can lead to
ClusterOperatorDown and ClusterOperatorDegraded firing during install,
as we see in [1], where:

* ClusterOperatorDegraded started pending at 5:00:15Z [2].
* Install completed at 5:09:58Z [3].
* ClusterOperatorDegraded started firing at 5:10:04Z [2].
* ClusterOperatorDegraded stopped firing at 5:10:23Z [2].
* The e2e suite complained about [1]:

    alert ClusterOperatorDegraded fired for 15 seconds with labels: {... name="authentication"...} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1939580)

ClusterOperatorDown is similar, but I'll leave addressing it to a
separate commit.  For ClusterOperatorDegraded, the degraded condition
should not be particularly urgent [4], so we should be fine bumping it
to 'warning' and using 'for: 30m' or something more relaxed than the
current 10m.

[1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.8/1389436726862155776
[2]: https://promecieus.dptools.openshift.org/?search=https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.8/1389436726862155776
     group by (alertstate) (ALERTS{alertname="ClusterOperatorDegraded"})
[3]: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.8/1389436726862155776/artifacts/e2e-aws-upi/clusterversion.json
[4]: openshift/api#916
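
For context, here is a rough sketch of what the softened rule looks like after this change, reconstructed from the diff hunks quoted in the review threads below plus a generic Prometheus alerting-rule layout; it is not a verbatim copy of the manifest:

    # Sketch only; pieced together from the diff hunks quoted in this PR,
    # not a verbatim copy of the manifest.
    - alert: ClusterOperatorDegraded
      annotations:
        # Still "10 minutes" after this PR; the wording is bumped to "30 minutes"
        # by the follow-up #556 referenced later in this thread.
        message: Cluster operator {{ $labels.name }} has been degraded for 10 minutes.
      # Expression body elided ("...") except for the tail quoted in the review below.
      expr: |
        (
          ...
          group by (name) (cluster_operator_up{job="cluster-version-operator"})
        ) == 1
      for: 30m              # softened from 10m
      labels:
        severity: warning   # softened from critical
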
@openshift-ci openshift-ci bot added the bugzilla/severity-medium and bugzilla/valid-bug labels May 7, 2021
@openshift-ci
Contributor

openshift-ci bot commented May 7, 2021

@wking: This pull request references Bugzilla bug 1957991, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.8.0) matches configured target release for branch (4.8.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @jianlinliu


In response to this:

Bug 1957991: install/0000_90_cluster-version-operator_02_servicemonitor: Soften ClusterOperatorDegraded

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot added the approved label May 7, 2021
@wking wking mentioned this pull request May 7, 2021
@openshift-ci
Contributor

openshift-ci bot commented May 7, 2021

@wking: This pull request references Bugzilla bug 1957991, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.8.0) matches configured target release for branch (4.8.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @jianlinliu


In response to this:

Bug 1957991: install/0000_90_cluster-version-operator_02_servicemonitor: Soften ClusterOperatorDegraded

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

    for: 30m
    labels:
    -  severity: critical
    +  severity: warning
Contributor


I’m ok with this. Warnings and degraded are for the morning. Do we have an estimate of how much churn this removes in the fleet?

Member Author


Poking around in recent Telemetry, the improvement for running clusters is expected to be small but noticeable. I expect impact during install to be larger, but haven't worked up numbers. We should be able to scrape it out of test-case JUnit post-merge.
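
For anyone who wants to eyeball that churn themselves, a rough starting point (a sketch only, using the standard ALERTS metric that also appears in the promecieus query above) would be something like:

    # Per alert series, count the scrape samples over the last week in which
    # ClusterOperatorDegraded was in the 'firing' state.
    count_over_time(ALERTS{alertname="ClusterOperatorDegraded", alertstate="firing"}[1w])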

@smarterclayton
Contributor

/approve
/lgtm

Has this mindset change been communicated to admin console and support teams?

@openshift-ci openshift-ci bot added the lgtm label May 7, 2021
@openshift-ci
Contributor

openshift-ci bot commented May 7, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: smarterclayton, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [smarterclayton,wking]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

6 similar comments

@LalatenduMohanty
Member

CC @openshift/sre-alert-sme, in case you want to review the changes.

@wking
Member Author

wking commented May 7, 2021

The upgrade failure is an unrelated Kube-API-server connectivity issue on Azure. It shouldn't block this landing.

/override ci/prow/e2e-agnostic-upgrade

@openshift-ci
Contributor

openshift-ci bot commented May 7, 2021

@wking: Overrode contexts on behalf of wking: ci/prow/e2e-agnostic-upgrade


In response to this:

The upgrade failure is an unrelated Kube-API-server connectivity issue on Azure. It shouldn't block this landing.

/override ci/prow/e2e-agnostic-upgrade

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-merge-robot openshift-merge-robot merged commit 4fc460a into openshift:master May 7, 2021
@openshift-ci
Contributor

openshift-ci bot commented May 7, 2021

@wking: All pull requests linked via external trackers have merged:

Bugzilla bug 1957991 has been moved to the MODIFIED state.


In response to this:

Bug 1957991: install/0000_90_cluster-version-operator_02_servicemonitor: Soften ClusterOperatorDegraded

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wking wking deleted the ClusterOperatorDegraded-softening branch May 7, 2021 21:12
        group by (name) (cluster_operator_up{job="cluster-version-operator"})
      ) == 1
    - for: 10m
    + for: 30m


Line 83 has this message: "Cluster operator {{ $labels.name }} has been degraded for 10 minutes". Should we also update "10 minutes" to "30 minutes"?

Member Author


Ah, nice catch. I've opened #556 to fix.

wking added a commit to wking/cluster-version-operator that referenced this pull request May 8, 2021
…usterOperatorDegraded message to 30m

Catching up with fb5257d
(install/0000_90_cluster-version-operator_02_servicemonitor: Soften
ClusterOperatorDegraded, 2021-05-06, openshift#554).
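
Presumably that follow-up just bumps the wording in the message annotation; a minimal sketch, assuming the annotation matches the fragment quoted in the review thread above:

    annotations:
      # After #556; previously read "... has been degraded for 10 minutes".
      message: Cluster operator {{ $labels.name }} has been degraded for 30 minutes.
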
wking added a commit to wking/cluster-version-operator that referenced this pull request Jun 8, 2021
…usterOperatorDegraded

During install, the CVO has pushed manifests into the cluster as fast
as possible without blocking on "has the in-cluster resource leveled?"
since way back in b0b4902 (clusteroperator: Don't block on failing
during initialization, 2019-03-11, openshift#136).  That can lead to
ClusterOperatorDown and ClusterOperatorDegraded firing during install,
as we see in [1], where:

* ClusterOperatorDegraded started pending at 5:00:15Z [2].
* Install completed at 5:09:58Z [3].
* ClusterOperatorDegraded started firing at 5:10:04Z [2].
* ClusterOperatorDegraded stopped firing at 5:10:23Z [2].
* The e2e suite complained about [1]:

    alert ClusterOperatorDegraded fired for 15 seconds with labels: {... name="authentication"...} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1939580)

ClusterOperatorDown is similar, but I'll leave addressing it to a
separate commit.  For ClusterOperatorDegraded, the degraded condition
should not be particularly urgent [4], so we should be fine bumping it
to 'warning' and using 'for: 30m' or something more relaxed than the
current 10m.

This commit brings back

* fb5257d
  (install/0000_90_cluster-version-operator_02_servicemonitor: Soften
  ClusterOperatorDegraded, 2021-05-06, openshift#554) and
* 92ed7f1
  (install/0000_90_cluster-version-operator_02_servicemonitor: Update
  ClusterOperatorDegraded message to 30m, 2021-05-08, openshift#556).

There are some conflicts, because I am not bringing back 90539f9
(pkg/cvo/metrics: Ignore Degraded for cluster_operator_up, 2021-04-26, openshift#550).
But that one had its own conflicts in metrics.go [5], and the
conflicts with this commit were orthogonal context issues, so moving
this back to 4.7 first won't make it much harder to bring back openshift#550
and such later on, if we decide to do that.

[1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.8/1389436726862155776
[2]: https://promecieus.dptools.openshift.org/?search=https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.8/1389436726862155776
     group by (alertstate) (ALERTS{alertname="ClusterOperatorDegraded"})
[3]: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.8/1389436726862155776/artifacts/e2e-aws-upi/clusterversion.json
[4]: openshift/api#916
[5]: openshift#550 (comment)
