
Conversation

@LalatenduMohanty
Member

No description provided.

@openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type) on Dec 15, 2023
@openshift-ci bot added the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress) on Dec 15, 2023
@openshift-ci-robot

openshift-ci-robot commented Dec 15, 2023

@LalatenduMohanty: This pull request references MCO-958 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.16.0" version, but no target version was set.


In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files) on Dec 15, 2023
@LalatenduMohanty force-pushed the MCO-958 branch 3 times, most recently from 2ff0319 to ff606a8 on December 15, 2023 at 20:07
url: https://issues.redhat.com/browse/MCO-958
name: NewMachieSet-ForWorkerNode-NotWorking
message: |-
  Adding a new worker node will fail for clusters running on ARO.
Member

I think this may affect nodes on reboot, but we don't know that yet; anyway, this is fine for now and we can amend it later.

@LalatenduMohanty force-pushed the MCO-958 branch 3 times, most recently from 39f2c02 to 276c677 on December 15, 2023 at 20:37
@LalatenduMohanty changed the title from "WIP: MCO-958: Blocking edges to 4.14.2+ and 4.13.25+" to "MCO-958: Blocking edges to 4.14.2+ and 4.13.25+" on Dec 15, 2023
@openshift-ci bot removed the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress) on Dec 15, 2023
to: 4.13.25
from: .*
url: https://issues.redhat.com/browse/MCO-958
name: AROBrokenDNSMasq
Member

Suggested change:
-name: AROBrokenDNSMasq
+name: brokenARODNSMasq

Seems like the name not being camelCase is breaking the CI job.

Member Author

I believe I have fixed the formatting issue which was causing the tests to fail: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_cincinnati-graph-data/4524/pull-ci-openshift-cincinnati-graph-data-master-validate-blocked-edges/1735761018186895360 . The camelCase issue was present in an earlier revision, but it is not there anymore.
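
Putting the fields from the two snippets above together, a complete blocked-edges entry for this risk would look roughly like the sketch below (the matchingRules stanza and its selector are illustrative assumptions, not necessarily the merged content):

  # Rough sketch assembled from the snippets above; the matchingRules
  # selector is an illustrative assumption, not necessarily the merged file.
  to: 4.13.25
  from: .*
  url: https://issues.redhat.com/browse/MCO-958
  name: AROBrokenDNSMasq
  message: |-
    Adding a new worker node will fail for clusters running on ARO.
  matchingRules:
  - type: PromQL
    promql:
      promql: |
        cluster_operator_conditions{_id="", name="aro"}  # illustrative ARO detection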

@sdodson
Member

sdodson commented Dec 15, 2023

/lgtm

@openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged) on Dec 15, 2023
@openshift-ci
Contributor

openshift-ci bot commented Dec 15, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LalatenduMohanty, sdodson

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-bot bot merged commit 2d338f3 into openshift:master on Dec 15, 2023
wking added a commit to wking/cluster-version-operator that referenced this pull request Dec 20, 2023
965bfb2 (pkg/cvo/availableupdates: Requeue risk evaluation on
failure, 2023-09-18, openshift#939) pivoted from "every syncAvailableUpdates
round that does anything useful has a fresh Cincinnati pull" to "some
syncAvailableUpdates rounds have a fresh Cincinnati pull, but others
just re-eval some Recommended=Unknown conditional updates".  Then
syncAvailableUpdates calls setAvailableUpdates.

However, until this commit, setAvailableUpdates had been bumping
LastAttempt every time, even in the "just re-eval conditional updates"
case.  That meant we never tripped the:

        } else if !optrAvailableUpdates.RecentlyChanged(optr.minimumUpdateCheckInterval) {
                klog.V(2).Infof("Retrieving available updates again, because more than %s has elapsed since %s", optr.minimumUpdateCheckInterval, optrAvailableUpdates.LastAttempt.Format(time.RFC3339))

condition to trigger a fresh Cincinnati pull.  Which could lead to
deadlocks like:

1. Cincinnati serves vulnerable PromQL, like [1].
2. Clusters pick up that broken PromQL, try to evaluate, and fail.
   Re-eval-and-fail loop continues.
3. Cincinnati PromQL fixed, like [2].
4. Cases:
   a. Before 965bfb2, and also after this commit, clusters pick up
      the fixed PromQL, try to evaluate, and start succeeding.  Hooray!
   b. Clusters with 965bfb2 but without this commit say "it's been
      a long time since we pulled fresh Cincinnati information, but it
      has not been long since my last attempt to eval this broken
      PromQL, so let me skip the Cincinnati pull and re-eval that old
      PromQL", which fails.  Re-eval-and-fail loop continues.

To break out of 4.b, clusters on impacted releases can roll their CVO
pod:

  $ oc -n openshift-cluster-version delete -l k8s-app=cluster-version-operator pod

which will clear out LastAttempt and trigger a fresh Cincinnati pull.
I'm not sure if there's another recovery method...

[1]: openshift/cincinnati-graph-data#4524
[2]: openshift/cincinnati-graph-data#4528
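
The gist of the fix can be sketched in a few lines of Go (the type and method names below are illustrative assumptions, not the actual pkg/cvo/availableupdates code): only bump LastAttempt on rounds that really pulled from Cincinnati, so RecentlyChanged eventually lapses and forces a fresh pull.

  // Sketch only; names here are assumptions, not the actual CVO source.
  package availableupdates

  import "time"

  // cache stands in for the CVO's cached Cincinnati response.
  type cache struct {
          LastAttempt time.Time // last time a fresh graph was pulled from Cincinnati
  }

  // RecentlyChanged mirrors the check quoted in the commit message above.
  func (c *cache) RecentlyChanged(interval time.Duration) bool {
          return c.LastAttempt.After(time.Now().Add(-interval))
  }

  // record refreshes LastAttempt only on rounds that actually pulled from
  // Cincinnati.  Rounds that merely re-evaluate cached conditional-update
  // PromQL leave LastAttempt alone, so once minimumUpdateCheckInterval
  // elapses, RecentlyChanged returns false and the next round performs a
  // fresh pull, breaking the re-eval-and-fail loop described above.
  func (c *cache) record(pulledFreshGraph bool) {
          if pulledFreshGraph {
                  c.LastAttempt = time.Now()
          }
  }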
wking added a commit to wking/cincinnati-graph-data that referenced this pull request Dec 21, 2023
Miguel points out that the exposure set is more complicated [1] than
what I'd done in 45eb9ea (blocked-edges/4.14*: Declare
AzureDefaultVMType, openshift#4541).  It's:

* Azure clusters born in 4.8 or earlier are exposed: both ARO (which
  creates clusters with Hive?) and clusters created via openshift-installer.
* ARO clusters created in 4.13 and earlier are exposed.

Generated by updating the 4.14.1 risk by hand, and then running:

  $ curl -s 'https://api.openshift.com/api/upgrades_info/graph?channel=candidate-4.14&arch=amd64' | jq -r '.nodes[] | .version' | grep '^4[.]14[.]' | grep -v '^4[.]14[.][01]$' | while read VERSION; do sed "s/4.14.1/${VERSION}/" blocked-edges/4.14.1-AzureDefaultVMType.yaml > "blocked-edges/${VERSION}-AzureDefaultVMType.yaml"; done

Breaking down the logic for my new PromQL:

a. First stanza.  Using topk is likely unnecessary, but if we do happen
   to have multiple matches for some reason, we'll take the highest.
   That gives us a "we match" 1 (if any aggregated entries were 1) or
   a "we don't match" 0 (if they were all 0), instead of a "we're having a
   hard time figuring out" Recommended=Unknown.

   a. If the cluster is ARO (using cluster_operator_conditions, as in
      ba09198 (MCO-958: Blocking edges to 4.14.2+ and 4.13.25+, 2023-12-15,
      openshift#4524)), first stanza is 1.  Otherwise, 'or' falls back to...

   b. Nested block, again with the cautious topk:

      a. If there are no cluster_operator_conditions, don't return a
         time series.  This ensures that "we didn't match a.a, but we
         might be ARO, and just have cluster_operator_conditions
         aggregation broken" gives us a Recommended=Unknown evaluation
         failure.

      b. Nested block, again with the cautious topk:

         a. born_by_4_9 yes case, with 4.(<=9) instead of the desired
            4.(<=8) because of the "old CVO bugs make it hard to
            distinguish between 4.(<=9) birth-versions" issue
            discussed in 034fa01 (blocked-edges/4.12.*: Declare
            AWSOldBootImages, 2022-12-14, openshift#2909).  Otherwise, 'or'
            falls back to...

         b. A check to ensure cluster_version{type="initial"} is
            working.  This ensures that "we didn't match the a.b.b.a
            born_by_4_9 yes case, but we may be that old, and just have
            cluster_version aggregation broken" gives us a
            Recommended=Unknown evaluation failure.

b. Second stanza, again with the cautious topk:

   a. cluster_infrastructure_provider is Azure.  Otherwise, 'or' falls
      back to...

   b. If there are no cluster_infrastructure_provider, don't return a
      time series.  This ensures that "we didn't match b.a, but we
      might be Azure, and just have cluster_infrastructure_provider
      aggregation broken" gives us a Recommended=Unknown evaluation
      failure.

All of the _id filtering is for use in hosted clusters or other PromQL
stores that include multiple clusters.  More background in 5cb2e93
(blocked-edges/4.11.*-KeepalivedMulticastSkew: Explicit _id="",
2023-05-09, openshift#3591).

So walking some cases:

* Non-Azure cluster, cluster_operator_conditions, cluster_version, and
  cluster_infrastructure_provider all working:
  * a.a matches no series (not ARO).  Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b could be 1 or 0 for cluster_version.
  * b.a matches no series (not Azure).
  * b.b gives 0 (confirming cluster_infrastructure_provider is working).
  * (1 or 0) * 0 = 0, cluster does not match.
* Non-Azure cluster, cluster_version is broken:
  * a.a matches no series (not ARO).  Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b matches no series (cluster_version is broken).
  * b.a matches no series (not Azure).
  * b.b gives 0 (confirming cluster_infrastructure_provider is working).
  * (no-match) * 0 = no-match, evaluation fails, Recommended=Unknown.
    Admin gets to figure out what's broken with cluster_version and/or
    manually assess their exposure based on the message and linked
    URI.
* Non-ARO Azure cluster born in 4.9, all time-series working:
  * a.a matches no series (not ARO).  Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b.a matches born_by_4_9 yes.
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster born in 4.9, all time-series working:
  * a.a matches (ARO).
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster born in 4.13, all time-series working (this is the case
  I'm fixing with this commit):
  * a.a matches (ARO).
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster, cluster_operator_conditions is broken.
  * a.a matches no series (cluster_operator_conditions is broken).
  * a.b.a matches no series (cluster_operator_conditions is broken).
  * b.a matches (Azure).
  * (no-match) * 1 = no-match, evaluation fails, Recommended=Unknown.
* ARO cluster, cluster_infrastructure_provider is broken.
  * a.a matches (ARO).
  * b.a matches no series (cluster_infrastructure_provider is broken).
  * b.b matches no series (cluster_infrastructure_provider is broken).
  * 1 * (no-match) = no-match, evaluation fails, Recommended=Unknown.
    We could add logic like a cluster_operator_conditions{name="aro"}
    check to the (b) stanza if we wanted to bake in "all ARO clusters
    are Azure" knowledge to successfully evaluate this case.  But I'd
    guess cluster_infrastructure_provider is working in most ARO
    clusters, and this PromQL is already complicated enough, so I
    haven't bothered with that level of tuning.
* ...lots of other combinations...

[1]: https://issues.redhat.com/browse/OCPCLOUD-2409?focusedId=23694976&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23694976
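
Putting that structure together, the matching expression might look roughly like the sketch below (label selectors and the exact aggregation shapes are illustrative assumptions; the shipped blocked-edges files carry the authoritative PromQL):

  (
    # a: ARO, or (cluster_operator_conditions working AND (born by 4.9, or cluster_version working))
    topk(1,
      group(cluster_operator_conditions{_id="", name="aro"})                          # a.a
      or
      topk(1,
        group(cluster_operator_conditions{_id=""})                                    # a.b.a
        *
        topk(1,
          group(cluster_version{_id="", type="initial", version=~"4[.][0-9][.].*"})   # a.b.b.a
          or
          0 * group(cluster_version{_id="", type="initial"})                          # a.b.b.b
        )
      )
    )
  )
  *
  (
    # b: Azure, with a sanity check that cluster_infrastructure_provider is reported at all
    topk(1,
      group(cluster_infrastructure_provider{_id="", type="Azure"})                    # b.a
      or
      0 * group(cluster_infrastructure_provider{_id=""})                              # b.b
    )
  )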
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/cluster-version-operator that referenced this pull request Jan 2, 2024
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/cluster-version-operator that referenced this pull request Jan 4, 2024
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/cluster-version-operator that referenced this pull request Jan 12, 2024
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/cluster-version-operator that referenced this pull request Jan 20, 2024