MCO-958: Blocking edges to 4.14.2+ and 4.13.25+ #4524
Conversation
@LalatenduMohanty: This pull request references MCO-958, which is a valid Jira issue. Warning: The referenced Jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.16.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed 2ff0319 to ff606a8
    url: https://issues.redhat.com/browse/MCO-958
    name: NewMachieSet-ForWorkerNode-NotWorking
    message: |-
      Adding a new worker node will fail for clusters running on ARO.
I think this may affect nodes on reboot as well, but we don't know yet; anyway, this is fine for now and we can amend it later.
Force-pushed 39f2c02 to 276c677
    to: 4.13.25
    from: .*
    url: https://issues.redhat.com/browse/MCO-958
    name: AROBrokenDNSMasq
Suggested change:

    -name: AROBrokenDNSMasq
    +name: brokenARODNSMasq
Seems like the name not being camelCase is breaking the CI job.
I believe I have fixed the formatting issue that was causing the tests to fail: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_cincinnati-graph-data/4524/pull-ci-openshift-cincinnati-graph-data-master-validate-blocked-edges/1735761018186895360. The camelCase issue was there in the past, but it is not there anymore.
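For context on the file being reviewed in the hunks above: a complete blocked-edges entry in this repository pairs those metadata fields with a matchingRules stanza that clusters evaluate as PromQL. The sketch below is only illustrative; it assumes the repository's usual to/from/url/name/message/matchingRules layout and uses a simple ARO check via cluster_operator_conditions{name="aro"} (the signal mentioned in the commit messages further down), not necessarily the exact rule merged in this PR.

    # Hedged sketch of a complete blocked-edges entry; the promql body is
    # illustrative, not necessarily the rule that shipped with this PR.
    to: 4.13.25
    from: .*
    url: https://issues.redhat.com/browse/MCO-958
    name: AROBrokenDNSMasq
    message: |-
      Adding a new worker node will fail for clusters running on ARO.
    matchingRules:
    - type: PromQL
      promql:
        promql: |
          # 1 when an "aro" ClusterOperator is reporting (the cluster is ARO),
          # 0 when cluster_operator_conditions exists but no "aro" operator does,
          # and no series at all (evaluation failure, Recommended=Unknown) when
          # the metric is missing entirely.
          group(cluster_operator_conditions{_id="",name="aro"})
          or
          0 * group(cluster_operator_conditions{_id=""})

The "or 0 * group(...)" guard is what keeps a missing metric from silently evaluating to "not exposed": with no cluster_operator_conditions series at all, the whole expression returns nothing and the risk stays Recommended=Unknown.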
Force-pushed 276c677 to 4b8b63f
Signed-off-by: Lalatendu Mohanty <[email protected]>
Force-pushed 4b8b63f to ba09198
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LalatenduMohanty, sdodson

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
965bfb2 (pkg/cvo/availableupdates: Requeue risk evaluation on failure, 2023-09-18, openshift#939) pivoted from "every syncAvailableUpdates round that does anything useful has a fresh Cincinnati pull" to "some syncAvailableUpdates rounds have a fresh Cincinnati pull, but others just re-eval some Recommended=Unknown conditional updates". Then syncAvailableUpdates calls setAvailableUpdates. However, until this commit, setAvailableUpdates had been bumping LastAttempt every time, even in the "just re-eval conditional updates" case. That meant we never tripped the:

    } else if !optrAvailableUpdates.RecentlyChanged(optr.minimumUpdateCheckInterval) {
        klog.V(2).Infof("Retrieving available updates again, because more than %s has elapsed since %s", optr.minimumUpdateCheckInterval, optrAvailableUpdates.LastAttempt.Format(time.RFC3339))

condition to trigger a fresh Cincinnati pull. That could lead to deadlocks like:

1. Cincinnati serves vulnerable PromQL, like [1].
2. Clusters pick up that broken PromQL, try to evaluate, and fail. The re-eval-and-fail loop continues.
3. The Cincinnati PromQL is fixed, like [2].
4. Cases:
   a. Before 965bfb2, and also after this commit, clusters pick up the fixed PromQL, try to evaluate, and start succeeding. Hooray!
   b. Clusters with 965bfb2 but without this commit say "it's been a long time since we pulled fresh Cincinnati information, but it has not been long since my last attempt to eval this broken PromQL, so let me skip the Cincinnati pull and re-eval that old PromQL", which fails. The re-eval-and-fail loop continues.

To break out of 4.b, clusters on impacted releases can roll their CVO pod:

    $ oc -n openshift-cluster-version delete -l k8s-app=cluster-version-operator pod

which will clear out LastAttempt and trigger a fresh Cincinnati pull. I'm not sure if there's another recovery method...

[1]: openshift/cincinnati-graph-data#4524
[2]: openshift/cincinnati-graph-data#4528
Miguel points out that the exposure set is more complicated [1] than what I'd done in 45eb9ea (blocked-edges/4.14*: Declare AzureDefaultVMType, openshift#4541). It's:

* Azure clusters born in 4.8 or earlier are exposed. Both ARO (which creates clusters with Hive?) and clusters created via openshift-installer.
* ARO clusters created in 4.13 and earlier are exposed.

Generated by updating the 4.14.1 risk by hand, and then running:

    $ curl -s 'https://api.openshift.com/api/upgrades_info/graph?channel=candidate-4.14&arch=amd64' | jq -r '.nodes[] | .version' | grep '^4[.]14[.]' | grep -v '^4[.]14[.][01]$' | while read VERSION; do sed "s/4.14.1/${VERSION}/" blocked-edges/4.14.1-AzureDefaultVMType.yaml > "blocked-edges/${VERSION}-AzureDefaultVMType.yaml"; done

Breaking down the logic for my new PromQL:

a. First stanza. Using topk is likely unnecessary, but if we do happen to have multiple matches for some reason, we'll take the highest. That gives us a "we match" 1 (if any aggregated entries were 1) or a "we don't match" 0 (if they were all 0), instead of a "we're having a hard time figuring out" Recommended=Unknown.
   a. If the cluster is ARO (using cluster_operator_conditions, as in ba09198 (MCO-958: Blocking edges to 4.14.2+ and 4.13.25+, 2023-12-15, openshift#4524)), the first stanza is 1. Otherwise, 'or' falls back to...
   b. Nested block, again with the cautious topk:
      a. If there are no cluster_operator_conditions, don't return a time series. This ensures that "we didn't match a.a, but we might be ARO, and just have cluster_operator_conditions aggregation broken" gives us a Recommended=Unknown evaluation failure.
      b. Nested block, again with the cautious topk:
         a. The born_by_4_9 yes case, with 4.(<=9) instead of the desired 4.(<=8) because of the "old CVO bugs make it hard to distinguish between 4.(<=9) birth-versions" issue discussed in 034fa01 (blocked-edges/4.12.*: Declare AWSOldBootImages, 2022-12-14, openshift#2909). Otherwise, 'or' falls back to...
         b. A check to ensure cluster_version{type="initial"} is working. This ensures that "we didn't match the a.b.b.a born_by_4_9 yes case, but we might be that old, and just have cluster_version aggregation broken" gives us a Recommended=Unknown evaluation failure.
b. Second stanza, again with the cautious topk:
   a. cluster_infrastructure_provider is Azure. Otherwise, 'or' falls back to...
   b. If there are no cluster_infrastructure_provider, don't return a time series. This ensures that "we didn't match b.a, but we might be Azure, and just have cluster_infrastructure_provider aggregation broken" gives us a Recommended=Unknown evaluation failure.

All of the _id filtering is for use in hosted clusters or other PromQL stores that include multiple clusters. More background in 5cb2e93 (blocked-edges/4.11.*-KeepalivedMulticastSkew: Explicit _id="", 2023-05-09, openshift#3591).

So walking some cases:

* Non-Azure cluster, cluster_operator_conditions, cluster_version, and cluster_infrastructure_provider all working:
  * a.a matches no series (not ARO). Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b could be 1 or 0 for cluster_version.
  * b.a matches no series (not Azure).
  * b.b gives 0 (confirming cluster_infrastructure_provider is working).
  * (1 or 0) * 0 = 0, cluster does not match.
* Non-Azure cluster, cluster_version is broken:
  * a.a matches no series (not ARO). Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b matches no series (cluster_version is broken).
  * b.a matches no series (not Azure).
  * b.b gives 0 (confirming cluster_infrastructure_provider is working).
  * (no-match) * 0 = no-match, evaluation fails, Recommended=Unknown. The admin gets to figure out what's broken with cluster_version and/or manually assess their exposure based on the message and linked URI.
* Non-ARO Azure cluster born in 4.9, all time series working:
  * a.a matches no series (not ARO). Fall back to...
  * a.b.a confirms cluster_operator_conditions is working.
  * a.b.b.a matches born_by_4_9 yes.
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster born in 4.9, all time series working:
  * a.a matches (ARO).
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster born in 4.13, all time series working (this is the case I'm fixing with this commit):
  * a.a matches (ARO).
  * b.a matches (Azure).
  * 1 * 1 = 1, cluster matches.
* ARO cluster, cluster_operator_conditions is broken:
  * a.a matches no series (cluster_operator_conditions is broken).
  * a.b.a matches no series (cluster_operator_conditions is broken).
  * b.a matches (Azure).
  * (no-match) * 1 = no-match, evaluation fails, Recommended=Unknown.
* ARO cluster, cluster_infrastructure_provider is broken:
  * a.a matches (ARO).
  * b.a matches no series (cluster_infrastructure_provider is broken).
  * b.b matches no series (cluster_infrastructure_provider is broken).
  * 1 * (no-match) = no-match, evaluation fails, Recommended=Unknown. We could add logic like a cluster_operator_conditions{name="aro"} check to the (b) stanza if we wanted to bake in "all ARO clusters are Azure" knowledge to successfully evaluate this case. But I'd guess cluster_infrastructure_provider is working in most ARO clusters, and this PromQL is already complicated enough, so I haven't bothered with that level of tuning.
* ...lots of other combinations...

[1]: https://issues.redhat.com/browse/OCPCLOUD-2409?focusedId=23694976&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23694976
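To make the stanza breakdown above easier to follow, here is a hedged reconstruction of the described expression in the repository's blocked-edges YAML form. The metric names (cluster_operator_conditions, cluster_version{type="initial"}, cluster_infrastructure_provider) and the a/b structure come from the breakdown itself; the exact label matchers, the birth-version regex, and the placement of topk are assumptions, so the merged rule may differ in detail.

    # Hypothetical reconstruction of the AzureDefaultVMType matching rule
    # described above; selectors and regexes are assumptions, not the merged rule.
    matchingRules:
    - type: PromQL
      promql:
        promql: |
          topk(1,
            # a.a: 1 when an "aro" ClusterOperator is reporting (ARO cluster)
            group(cluster_operator_conditions{_id="",name="aro"})
            or
            # a.b: only returns a series when cluster_operator_conditions is reporting
            (
              # a.b.a: 0 when cluster_operator_conditions exists, no series otherwise
              0 * group(cluster_operator_conditions{_id=""})
              +
              topk(1,
                # a.b.b.a: 1 when the cluster was born in 4.9 or earlier
                group(cluster_version{_id="",type="initial",version=~"4[.][0-9][.].*"})
                or
                # a.b.b.b: 0 when cluster_version{type="initial"} is reporting at all
                0 * group(cluster_version{_id="",type="initial"})
              )
            )
          )
          *
          topk(1,
            # b.a: 1 when the infrastructure provider is Azure
            group(cluster_infrastructure_provider{_id="",type="Azure"})
            or
            # b.b: 0 when cluster_infrastructure_provider is reporting, no series otherwise
            0 * group(cluster_infrastructure_provider{_id=""})
          )

Walking the cases listed above against this sketch should give the same results: a missing metric on either side leaves that stanza with no series, the product then returns nothing, and the risk stays Recommended=Unknown instead of silently resolving to "not exposed".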
No description provided.