using yq and oc installed from executor images #31256
Conversation
@wking @patrickdillon: as we discussed in Slack, I've added myself and @yunjiang29 as approvers for some install steps in this PR. Please help review.
/lgtm |
```diff
@@ -1,9 +1,6 @@
 ref:
   as: aws-deprovision-cco-manual-users-static
-  from_image:
-    namespace: ocp
-    name: "4.5"
```
don't you need to pin a recent 4.y if you want to rely on openshift/installer#6008? Otherwise 4.8 flows using this step, etc., will fail to find the binary, right?
To be honest, I am a bit confused about the difference between:

```yaml
from: upi-installer
```

versus:

```yaml
from_image:
  namespace: ocp
  name: "4.5"
  tag: upi-installer
```

My understanding is that `from: upi-installer` relies on the Prow CI jobs that define `upi-installer`, right? If so, when a job has not defined `upi-installer`, any job referencing this step will be broken, as you said. In that case, I probably need to pin it to 4.12.
For this specific step, it is presently only consumed by QE's jobs, where we define `upi-installer` (see the sketch below), e.g. https://github.com/openshift/release/blob/master/ci-operator/config/openshift/verification-tests/openshift-verification-tests-master__installer-rehearse-4.12.yaml#L14-L17, so it is supposed to work in 4.12 Prow jobs. Of course, if we want the step to work in a broader set of jobs, it is probably safest to pin it to the 4.12 tag. WDYT?
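For reference, jobs like the linked one wire the image up through `base_images`; a minimal sketch, assuming the usual ci-operator config shape (the exact contents of the linked file are an assumption):

```yaml
# Sketch of a ci-operator config excerpt. Jobs that declare this mapping
# provide the image that a step's `from: upi-installer` resolves to;
# jobs without it would fail to resolve the image, breaking the step.
base_images:
  upi-installer:
    namespace: ocp
    name: "4.12"
    tag: upi-installer
```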
I changed back to using a pinned-version (4.12) upi-installer image for the step.
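Presumably the resulting ref looks something like the following sketch; the step name and `tag` come from the diff above, and `name: "4.12"` from the comment, but the exact file contents are an assumption:

```yaml
# Sketch of the pinned form: the step always pulls the 4.12 upi-installer
# image from the ocp namespace, regardless of which 4.y the job targets.
ref:
  as: aws-deprovision-cco-manual-users-static
  from_image:
    namespace: ocp
    name: "4.12"
    tag: upi-installer
```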
```diff
@@ -1,9 +1,6 @@
 ref:
   as: ipi-conf-aws-blackholenetwork
-  from_image:
-    namespace: ocp
-    name: "4.5"
```
13cec2f (#9791) explains the history of aws availability in early 4.y upi-installer images. If we want to drop this pin here, we'd be breaking this step for 4.4, etc. Which we're maybe ok with now? But if so, we should call that out explicitly in the commit message, so folks realize we're intentionally breaking the use case.
I changed back to using a pinned-version (4.12) upi-installer image for the step.
The failed jobs are due to the e2e testing part and are not relevant to the change in this PR.
@jianlinliu: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/approve |
wking left a comment:
/lgtm
@jianlinliu: Updated the …

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
…try (#32153)

* ci-operator/step-registry/ipi/conf/telemetry: Disable Telemetry

The OCM team is concerned about the volume of data submitted by ephemeral CI clusters. This commit disables Telemetry by default (for the bulk of CI, which flows through this step), while still allowing Telemetry to be enabled for jobs that have important data to report, or jobs that want to exercise the Telemetry or Insights reporting logic. From [1]:

> You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator.

In 4.12, the Insights operator is growing a new configuration structure that allows disabling Insights [1,2] without affecting Telemetry. And it turns out that the monitoring operator has a similar option to disable Telemetry [3], although the only documented use for telemeterClient is for setting nodeSelector [4]. There are a number of existing steps poking at the cluster-monitoring-config ConfigMap, including ipi-conf-inframachineset setting telemeterClient's nodeSelector, so I am following ipi-conf-user-workload-monitoring's use of yq patching and manifest naming to fit in with the other steps.

I'm following 8a4696c (using yq installed from upi-installer image, 2022-08-25, #31256) to get yq from the upi-installer image instead of curling it down dynamically. With the 4.12 pin to get `yq`, we're safe using `upi-installer`, even for CI jobs that are otherwise looking at different 4.y.

[1]: https://github.com/openshift/enhancements/blob/ef85659d01738b9f89958d5f0da31cff05bb1182/enhancements/insights/insights-config-api.md
[2]: https://docs.openshift.com/container-platform/4.11/support/remote_health_monitoring/opting-out-of-remote-health-reporting.html
[3]: https://github.com/openshift/cluster-monitoring-operator/blob/8d331d78b22948d36c20da0552763ddd8a4e2093/pkg/manifests/config.go#L337
[4]: https://docs.openshift.com/container-platform/4.11/monitoring/configuring-the-monitoring-stack.html#moving-monitoring-components-to-different-nodes_configuring-the-monitoring-stack

* ci-operator/step-registry: Inject ipi-conf-telemetry after ipi-conf

Generated with:

```console
$ sed -i 's/^\( *- ref: ipi-conf\)$/\1\n\1-telemetry/' $(git grep -l '^ *- ref: ipi-conf$' ci-operator/step-registry)
```

to slot the new step in after the generic ipi-conf, so we can configure Telemetry in all of the existing chains and workflows that were flowing through ipi-conf.

* ci-operator/config/openshift/telemeter: TELEMETER_ENABLED=true

When making changes to the Telemeter client, we want to ensure that we're still exercising uploads.
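For illustration, a manifest along the following lines is roughly what a Telemetry-disabling step could render. This is a hypothetical sketch, not a confirmed excerpt from the step: the `telemeterClient.enabled` knob is inferred from the cluster-monitoring-operator config linked as [3] above.

```yaml
# Hypothetical cluster-monitoring-config manifest. The telemeterClient
# stanza mirrors the monitoring operator's config structure referenced
# in [3]; `enabled: false` is an assumption based on that code.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    telemeterClient:
      enabled: false
```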
Per installer#6008 (yq in upi installer containers), update steps to use the yq binary installed from the upi-installer image. For the workers-rhel-repos step, we also saw several oc downloading issues, so inject `cli: latest` to enhance it (see the sketch below). e.g.: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/31307/rehearse-31307-periodic-ci-openshift-verification-tests-master-installer-rehearse-4.12-installer-rehearse-aws/1560142508833378304
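A minimal sketch of the `cli: latest` injection mentioned above; the command file and resource values are placeholders, and only the `cli` field is the point:

```yaml
# Hypothetical step-registry ref. `cli: latest` asks ci-operator to copy
# the `oc` binary from the release named "latest" into the step image,
# so the step no longer has to download `oc` at runtime.
ref:
  as: workers-rhel-repos
  cli: latest
  from: upi-installer
  commands: workers-rhel-repos-commands.sh
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
```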