ci-operator/config/openshift/cincinnati-operator: operator-e2e-old-ocp-published-graph-data, etc. #45245
Conversation
/pj-rehearse
ci-operator/config/openshift/cincinnati-operator: operator-e2e-old-ocp-published-graph-data, etc.

Moving to a recent Go builder, based on [1] and:

```console
$ oc -n ocp get -o json imagestream builder | jq -r '.status.tags[] | select(.items | length > 0) | .items[0].created + " " + .tag' | sort | grep golang
...
2023-11-02T19:53:15Z rhel-8-golang-1.18-openshift-4.11
2023-11-02T19:53:23Z rhel-8-golang-1.17-openshift-4.11
2023-11-02T20:49:19Z rhel-8-golang-1.19-openshift-4.13
2023-11-02T20:49:25Z rhel-9-golang-1.19-openshift-4.13
2023-11-02T21:54:25Z rhel-9-golang-1.20-openshift-4.14
2023-11-02T21:54:46Z rhel-8-golang-1.20-openshift-4.14
2023-11-02T21:55:24Z rhel-8-golang-1.19-openshift-4.14
2023-11-02T21:55:29Z rhel-9-golang-1.19-openshift-4.14
```

I'd tried dropping the `build_root` stanza, because we didn't seem to need the functionality it delivers [2]. But that removal caused failures like [3]:

```
Failed to load CI Operator configuration" error="invalid ci-operator config: invalid configuration: when 'images' are specified 'build_root' is required and must have image_stream_tag, project_image or from_repository set" source-file=ci-operator/config/openshift/cincinnati-operator/openshift-cincinnati-operator-master.yaml
```

And [2] docs a need for Git, which apparently the UBI images don't have. So I'm still using a Go image here, even though we don't need Go, and even though that means some tedious bumping to keep up with RHEL and Go versions instead of floating.

The `operators` stanza doc'ed in [4] remains largely unchanged, although I did rename `cincinnati_operand_latest` to `cincinnati-operand`, because these tests use a single operand image, and there is no need to distinguish between multiple operand images with "latest". The image used for operator-sdk (which I bump to an OpenShift 4.14 base) and its use are doc'ed in [5]. The 4.14 cluster-claim pool I'm transitioning to is listed as healthy in [6]. For the end-to-end tests, we install the operator via the test suite, so we do not need the SDK bits.

I've dropped `OPERATOR_IMAGE`, because we are well past the transition initiated by eae9d38 (ci-operator/config/openshift/cincinnati-operator: Set RELATED_IMAGE_*, 2021-04-05, openshift#17435) and openshift/cincinnati-operator@799d18525b (Changing the name to make OSBS auto repo/registry replacements to work, 2021-04-06, openshift/cincinnati-operator#104).

I'm consistently using the current Cincinnati operand instead of the pinned one, because we ship the OpenShift Update Service Operator as a bundle with the operator and operand, and while it might be useful to grow update-between-OSUS-releases test coverage, we do not expect long durations of new operators coexisting with old-image operand pods. And we never expect new operators to touch Deployments with old operand images, except to bump them to new operand images. We'd been using digest-pinned operand images here since efcafb6 (ci-operator/config/openshift/cincinnati-operator: Move e2e-operator to multi-step, 2020-10-06, openshift#12486), where I said:

> In a future pivot we'll pull the operand image out of CI too, instead of hard-coding. But with this change we at least move the hard-coding into the CI repository.

4f46d7e (cincinnati-operator: test operator against released OSUS version and latest master, 2022-01-11, openshift#25152) brought in that floating operand image but, for reasons that I am not clear on, did not drop the digest-pinned operand. I'm dropping it now.

With "which operand image" removed as a differentiator, the remaining differentiators for the end-to-end tests are:

* Which host OpenShift?
  * To protect from "new operators require new platform capabilities not present in older OpenShift releases", we have an old-ocp job. It's currently 4.11, for the oldest supported release [7].
  * To protect from "new operators still use platform capabilities that have been removed from development branches of OpenShift", we have a new-ocp job. It's currently 4.14, as the most modern openshift-ci pool in [6], but if there were a 4.15 openshift-ci pool I'd use that to ensure we work on dev-branch engineering candidates like 4.15.0-ec.1.
  * To protect against "HyperShift does something the operator does not expect", we have a hypershift job. I'd prefer to defer "which version?" to the workflow, because we do not expect HyperShift-specific differences to evolve much between 4.y releases, while the APIs used by the operator (Deployments, Services, Routes, etc.) might. But perhaps I'm wrong, and we will see more API evolution during HyperShift minor versions. And in any case, today 4.14 fails with [8]:

    ```
    Unable to apply 4.14.1: some cluster operators are not available
    ```

    so in the short term I'm going with 4.13, but with a generic name so we only have to bump one place as HyperShift support improves.
  * I'm not worrying about enumerating all the current 4.y options like we had done before. That is more work to maintain, and renaming required jobs confuses Prow and requires an /override of the removed job. It seems unlikely that we work on 4.old, break on some 4.middle, and work again on 4.dev. Again, we can always revisit this if we change our minds about the exposure.
* Which graph-data?
  * To protect against "I updated my OSUS without changing the graph-data image, and it broke", we have published-graph-data jobs. These consume images that were built by previous postsubmits in the cincinnati-graph-data repository.
  * We could theoretically also add coverage for older forms of graph-data images we suspect customers might be using. I'm punting this kind of thing to possible future work, if we decide the exposure is significant enough to warrant ongoing CI coverage.
  * To allow testing new features like serving signatures, we have a local-graph-data job. This consumes a graph-data image built from steps in the operator repository, allowing convenient testing of changes that simultaneously tweak the operator and how the graph-data image is built. For example, [9] injects an image signature into graph-data, and updates graph-data to serve it. I'm setting a `GRAPH_DATA` environment variable to `local` to allow the test suite to easily distinguish this case.

[1]: https://docs.ci.openshift.org/docs/architecture/images/#ci-images
[2]: https://docs.ci.openshift.org/docs/architecture/ci-operator/#build-root-image
[3]: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/45245/pull-ci-openshift-release-master-generated-config/1720218786344210432
[4]: https://docs.ci.openshift.org/docs/how-tos/testing-operator-sdk-operators/#building-operator-bundles
[5]: https://docs.ci.openshift.org/docs/how-tos/testing-operator-sdk-operators/#simple-operator-installation
[6]: https://docs.ci.openshift.org/docs/how-tos/cluster-claim/#existing-cluster-pools
[7]: https://access.redhat.com/support/policy/updates/openshift/#dates
[8]: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/45245/rehearse-45245-pull-ci-openshift-cincinnati-operator-master-operator-e2e-hypershift-local-graph-data/1720287506777247744
[9]: openshift/cincinnati-operator#176
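The `build_root` validation error quoted above is about stanzas shaped like the following; this is a minimal sketch satisfying `image_stream_tag`, assuming the `ocp/builder` imagestream tags from the listing above (illustrative, not necessarily the exact tag this PR pins):

```yaml
build_root:
  image_stream_tag:
    namespace: ocp
    name: builder
    tag: rhel-8-golang-1.20-openshift-4.14
```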
/pj-rehearse

All green :) /pj-rehearse ack
/cc
/hold I'm very likely to LGTM this, but I think we have an opportunity to simplify a bit further; need to check out something (will not hold for long)
petr-muller left a comment:
LGTM
I thought I could simplify this a bit, but I cannot. I am leaving the hold in place b/c @LalatenduMohanty wanted to take a look, but feel free to lift the hold anytime.
/hold cancel
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: LalatenduMohanty, petr-muller, wking.
@wking: all tests passed!
Catching up with openshift/cincinnati@efe98dcbbbc6 (add metadata-helper deployments, 2023-07-18, openshift/cincinnati#816), allowing users to retrieve signatures from the metadata Route. For signatures provided via the graph-data image, this will provide more convenient access than pushing signature ConfigMaps to individual clusters. [1] is in flight with a proposed mechanism to configure clusters to consume this signature-metadata endpoint.

I'm using the multi-arch 4.13.0 as the example release for signatures:

```console
$ curl -s 'https://api.openshift.com/api/upgrades_info/graph?channel=stable-4.13&arch=multi' | jq -r '.nodes[] | select(.version == "4.13.0").payload'
quay.io/openshift-release-dev/ocp-release@sha256:beda83fb057e328d6f94f8415382350ca3ddf99bb9094e262184e0f127810ce0
```

The signature location in the graph-data image is defined in openshift/cincinnati-graph-data@9e9e97cf2a (README: Define a 1.2.0 filesystem schema for release signatures, 2023-04-19, openshift/cincinnati-graph-data#3509). The GRAPH_DATA local check consumes openshift/release@23d93465e8 (ci-operator/config/openshift/cincinnati-operator: operator-e2e-old-ocp-published-graph-data, etc., 2023-11-02, openshift/release#45245), which sets that variable for the operator-e2e-hypershift-local-graph-data presubmit, which consumes the graph-data image built from dev/Dockerfile (where we inject the signature we're testing for). The other end-to-end tests will consume external graph-data images (built by cincinnati-graph-data postsubmits), have GRAPH_DATA unset, and expect '404 Not Found' for requests for that signature.

[1]: openshift/enhancements#1485
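The signature path for that digest can be sketched as a small URI construction, following the `signatures/<algo>=<hex>/signature-<n>` layout as I read the 1.2.0 graph-data schema (an illustration, not the operator's code; verify the exact layout against the schema README):

```shell
# Construct the signature URI for a release image digest, following the
# signatures/<algo>=<hex>/signature-<n> layout from graph-data schema 1.2.0.
digest='sha256:beda83fb057e328d6f94f8415382350ca3ddf99bb9094e262184e0f127810ce0'
algo="${digest%%:*}"   # portion before the colon: sha256
hex="${digest#*:}"     # hex portion of the digest
uri="signatures/${algo}=${hex}/signature-1"
echo "${uri}"
```

Requesting that URI against the metadata Route is what the e2e tests do; published graph-data images (which lack this signature) should answer 404.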
Moving to a recent Go builder, based on these docs and the `oc -n ocp get imagestream builder` tag listing quoted in the commit message above.

I'm dropping the `build_root` stanza, because we didn't seem to need the functionality it delivers, and we need a UBI builder for the graph-data image and a Go builder for the operator image.

The `operators` stanza remains largely unchanged, although I did rename `cincinnati_operand_latest` to `cincinnati-operand`, because these tests use a single operand image, and there is no need to distinguish between multiple operand images with `latest`. The image used for operator-sdk (which I bump to an OpenShift 4.14 base) and its use are doc'ed here. The 4.14 cluster-claim pool I'm transitioning to is listed as healthy.

For the end-to-end tests, we install the operator via the test suite, so we do not need the SDK bits. I've dropped `OPERATOR_IMAGE`, because we are well past the transition initiated by eae9d38 (#17435) and openshift/cincinnati-operator@799d18525b (openshift/cincinnati-operator#104).

I'm consistently using the current Cincinnati operand instead of the pinned one, because we ship the OpenShift Update Service Operator as a bundle with the operator and operand, and while it might be useful to grow update-between-OSUS-releases test coverage, we do not expect long durations of new operators coexisting with old-image operand pods. And we never expect new operators to touch Deployments with old operand images, except to bump them to new operand images. We'd been using digest-pinned operand images here since efcafb6 (#12486), where I said:

> In a future pivot we'll pull the operand image out of CI too, instead of hard-coding. But with this change we at least move the hard-coding into the CI repository.

4f46d7e (#25152) brought in that floating operand image but, for reasons that I am not clear on, did not drop the digest-pinned operand. I'm dropping it now.
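For context, a ci-operator `operators` stanza with an operand substitution is roughly shaped like this (field names per the ci-operator operator-bundle docs; the Dockerfile path and pullspec are hypothetical, and only the `cincinnati-operand` tag name comes from this PR):

```yaml
operators:
- bundles:
  - dockerfile_path: bundle.Dockerfile  # hypothetical path in the operator repo
  substitutions:
  # Swap the operand pullspec baked into the bundle for the CI-built image.
  - pullspec: quay.io/example/cincinnati-operand:latest  # hypothetical pullspec
    with: pipeline:cincinnati-operand
```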
With "which operand image" removed as a differentiator, the remaining differentiators for the end-to-end tests are:

Which host OpenShift?

* `old-ocp` job. It's currently 4.11, for the oldest supported release.
* `new-ocp` job. It's currently 4.15.
* `hypershift` job. This job currently defers "which version?" to the workflow, because we do not expect HyperShift-specific differences to evolve much between 4.y releases, while the APIs used by the operator (Deployments, Services, Routes, etc.) might. We could revisit this and launch `old-hypershift` and `new-hypershift` flavors in the future if we see a need.

Which graph-data?

* `published-graph-data` jobs. These consume images that were built by previous postsubmits in the cincinnati-graph-data repository.
* `local-graph-data` job. This consumes a graph-data image built from steps in the operator repository, allowing convenient testing of changes that simultaneously tweak the operator and how the graph-data image is built. For example, OTA-1014: controllers: Add metadata container and Route (cincinnati-operator#176) injects an image signature into graph-data, and updates graph-data to serve it. I'm setting a `GRAPH_DATA` environment variable to `local` to allow the test suite to easily distinguish this case.
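The `GRAPH_DATA` split could be consumed in the test suite along these lines (a sketch of the branching only, not the actual suite code; the variable name and the 404 expectation come from this PR, while the 200 expectation for the local case is an assumption):

```shell
# Sketch: branch e2e expectations on GRAPH_DATA, which this PR sets to
# "local" only for the local-graph-data job.
if [ "${GRAPH_DATA:-}" = "local" ]; then
  # Locally built graph-data (dev/Dockerfile) carries the injected signature,
  # so we assume the metadata Route serves it successfully.
  expected_status=200
else
  # Published graph-data images do not carry this signature.
  expected_status=404
fi
echo "expecting HTTP ${expected_status} for the signature request"
```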