HOSTEDCP-1419: Always include AWS default security group in worker security groups #3527
Conversation
@csrwng: This pull request references HOSTEDCP-1419, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.16.0" version, but no target version was set.
Skipping CI for Draft Pull Request.
✅ Deploy Preview for hypershift-docs ready!
/test verify
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: csrwng.
Before this commit, the default security group was only used if no security groups were specified on a NodePool. After this change, the default security group will always be used on all workers. Any security groups defined in the NodePool will be added to the default. Security group creation has been removed from the create infra CLI, given that it's a duplicate of the default one created by the control plane operator.
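For illustration, the merge behavior described above could look roughly like the following Go sketch. This is a minimal, hypothetical sketch, not the actual HyperShift implementation; the function and parameter names are made up, and it assumes the default security group ID and the NodePool's AWS security group IDs are already known:

```go
package main

import "fmt"

// workerSecurityGroupIDs always puts the cluster's default security group first,
// then appends any NodePool-specified groups, skipping duplicates of the default.
func workerSecurityGroupIDs(defaultSG string, nodePoolSGs []string) []string {
	ids := []string{defaultSG}
	for _, sg := range nodePoolSGs {
		if sg != defaultSG {
			ids = append(ids, sg)
		}
	}
	return ids
}

func main() {
	// A NodePool with two extra security groups still gets the default group.
	fmt.Println(workerSecurityGroupIDs("sg-default", []string{"sg-custom-a", "sg-custom-b"}))
	// Output: [sg-default sg-custom-a sg-custom-b]
}
```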
Force-pushed from 493b1c8 to 1ae3a36.
/lgtm
/retest
/retest-required
@csrwng: The following tests failed.
/hold
Revision 1ae3a36 was retested 3 times: holding
/override ci/prow/e2e-kubevirt-aws-ovn
@csrwng: Overrode contexts on behalf of csrwng: ci/prow/e2e-kubevirt-aws-ovn
/hold cancel
[ART PR BUILD NOTIFIER] This PR has been included in build ose-hypershift-container-v4.16.0-202402090109.p0.g77ffd43.assembly.stream.el9 for distgit hypershift. |
[1] includes [2] pitching local update services in managed regions. However, HyperShift had (until this commit) been forcing ClusterVersion spec.upstream empty. This commit is part of making the upstream update service configurable on HyperShift, so those clusters can hear about and assess known issues with updates in environments where the default https://api.openshift.com/api/upgrades_info update service is not accessible, or in which an alternative update service is desired for testing or policy reasons. This includes [2], mentioned above, but would also include any other instance of disconnected/restricted-network use.

The alternatives for folks who want HyperShift updates in places where api.openshift.com is inaccessible are:

* Don't use an update service and manage that aspect manually. But the update service declares multiple new releases each week, as well as delivering information about known risks/issues for local clusters to evaluate their exposure. That's a lot of information to manage manually, if folks decide not to plug into the existing update service tooling.
* Run a local update service, and fiddle with DNS and X.509 certs so that packets aimed at api.openshift.com get routed to your local service. This requires less long-term effort than manually replacing the entire update service system, but it requires your clusters to trust an X.509 Certificate Authority that is willing to sign certificates for your local service saying "yup, that one is definitely api.openshift.com".

The implementation is similar to e438076 (api/v1beta1/hostedcluster_types: Add channel, availableUpdates, and conditionalUpdates, 2022-12-14, openshift#1954), where I added update-related status properties piped up via CVO -> ClusterVersion.status -> HostedControlPlane.status -> HostedCluster.status. This commit adds a new spec property that is piped down via:

1. HostedCluster (management cluster) spec.updateService.
2. HostedControlPlane (management cluster) spec.updateService.
3. Cluster-version operator Deployment --update-service command line option (management cluster), and also ClusterVersion spec.upstream.
4. Cluster-version operator container (management cluster).

The CVO's new --update-service option is from [3], and we're using that pathway instead of ClusterVersion spec, because we don't want the update service URI to come via the customer-accessible hosted API. It's ok for the channel (which is used as a query parameter in the update-service GET requests) to continue to flow in via ClusterVersion spec. I'm still populating ClusterVersion's spec.upstream in this commit for discoverability, although because --update-service is being used, the CVO will ignore the ClusterVersion spec.upstream value.

When the HyperShift operator sets this in HostedControlPlane for an older cluster version, the old HostedControlPlane controller (launched from the hosted release payload) will not recognize or propagate the new property. But that's not a terrible thing, and issues with fetching update recommendations from the default update service will still be reported in the RetrievedUpdates condition [4] with a message mentioning the update service URI if there are problems [5].

Although it doesn't look like we capture that condition currently:

    $ grep -r 'ConditionType = "ClusterVersion' api/hypershift/v1beta1/
    api/hypershift/v1beta1/hostedcluster_conditions.go: ClusterVersionSucceeding ConditionType = "ClusterVersionSucceeding"
    api/hypershift/v1beta1/hostedcluster_conditions.go: ClusterVersionUpgradeable ConditionType = "ClusterVersionUpgradeable"
    api/hypershift/v1beta1/hostedcluster_conditions.go: ClusterVersionFailing ConditionType = "ClusterVersionFailing"
    api/hypershift/v1beta1/hostedcluster_conditions.go: ClusterVersionProgressing ConditionType = "ClusterVersionProgressing"
    api/hypershift/v1beta1/hostedcluster_conditions.go: ClusterVersionAvailable ConditionType = "ClusterVersionAvailable"
    api/hypershift/v1beta1/hostedcluster_conditions.go: ClusterVersionReleaseAccepted ConditionType = "ClusterVersionReleaseAccepted"

We could opt to collect it in follow-up work if issues occur frequently enough that dropping down to the ClusterVersion level to debug becomes tedious. If, in the future, we grow an option to give the hosted CVO kubeconfig access to both the management and hosted Kubernetes APIs, we could drop --update-service and have the hosted CVO reach out and read this configuration off HostedControlPlane or HostedCluster directly.

I'm bumping v1alpha1 as described in 4af5ffa (api/v1alpha1: Catch up with channel, availableUpdates, and conditionalUpdates, 2023-01-17, openshift#1954). The recent 1ae3a36 (Always include AWS default security group in worker security groups, 2024-02-05, openshift#3527) suggests that the "v1alpha1 is a superset of v1beta1" policy remains current.

The cmd/install/assets, client/applyconfiguration, docs/content/reference/api.md, hack/app-sre/saas_template.yaml, and vendor changes are automatic via:

    $ make update

[1]: https://issues.redhat.com/browse/XCMSTRAT-513
[2]: https://issues.redhat.com/browse/OTA-1185
[3]: openshift/cluster-version-operator#1035
[4]: https://github.com/openshift/api/blob/54b3334bfac52883d515c11118ca191bffba5db7/config/v1/types_cluster_version.go#L702-L706
[5]: https://github.com/openshift/cluster-version-operator/blob/ce6169c7b9b0d44c2e41342e6414ed9db0a31a63/pkg/cvo/availableupdates.go#L356-L365
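As a rough illustration of the pipe-down described in that commit message, the management-side CVO container args could gain the --update-service flag from the HostedControlPlane spec along these lines. This is a hedged sketch with simplified, hypothetical types and placeholder arguments; it is not the actual HyperShift or cluster-version-operator code:

```go
package main

import "fmt"

// hostedControlPlaneSpec stands in for the relevant HostedControlPlane fields.
type hostedControlPlaneSpec struct {
	UpdateService string // alternative update service URL; empty means the CVO default
	Channel       string // the channel still flows in via ClusterVersion spec
}

// cvoArgs sketches how the CVO container args on the management cluster might be
// built: the update service URI is passed as a command-line flag so it never has
// to travel through the customer-accessible hosted ClusterVersion spec.
func cvoArgs(spec hostedControlPlaneSpec) []string {
	args := []string{"start"} // placeholder base args, not the real CVO invocation
	if spec.UpdateService != "" {
		args = append(args, "--update-service="+spec.UpdateService)
	}
	return args
}

func main() {
	spec := hostedControlPlaneSpec{
		UpdateService: "https://osus.example.com/api/upgrades_info", // hypothetical local update service
		Channel:       "stable-4.16",
	}
	fmt.Println(cvoArgs(spec))
}
```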
What this PR does / why we need it:
Before this commit, the default security group was only used if no security groups were specified on a NodePool. After this change, the default security group will always be used on all workers. Any security groups defined in the NodePool will be added to the default.
Security group creation has been removed from the create infra CLI, given that it's a duplicate of the default one created by the control plane operator.
Which issue(s) this PR fixes (optional, use fixes #<issue_number>(, fixes #<issue_number>, ...) format, where issue_number might be a GitHub issue, or a Jira story):

Fixes #HOSTEDCP-1419
Checklist