NE-1129: Make ingress operator optional on HyperShift #1415

Conversation
/retitle [WIP] Make ingress operator optional

Just found out about an important HyperShift detail which is not reflected in the EP; I have to correct it. Will be back unWIPed soon.
/assign @bparees

/assign @Miciah

/assign @csrwng
> TBD
>
> ### API Extensions
>
> N/A - no api extensions are being introduced in this proposal
This enhancement proposes adding the capabilities property to the HostedCluster (and presumably also the HostedControlPlane) spec structures, and modifications to existing CRDs are part of this section's target subject. I'm fine with "Skipping this section while we're scaffolding alternatives" or some other placeholder (and I'm not an approver, so it doesn't matter whether I'm fine with something or not), but I'd have expected the capabilities additions to be discussed in more detail here.
The HostedCluster API was made up purely for demonstration purposes. I'm not proposing to add it; I'm just trying to guess how it might look. Maybe I should not add the HostedCluster YAML to this EP at all, to avoid the confusion.
Sorry, I wanted to add you as an approver and just forgot. Correcting this...
Removed the HostedCluster yaml to avoid confusion. Added wking as approver.
Agree with Trevor, it'd be good to see API details on how the new capability fits into https://github.com/openshift/enhancements/blob/c5fc7f64a8fc43397c5e81d25c867038d2eabbae/enhancements/installer/component-selection.md#implementation-detailsnotesconstraints and how we plan to expose it in the HyperShift API.
> Agree with Trevor, it'd be good to see API details on how the new capability
The details of how the capability API can be used in the installer or the CVO are described in the component selection EP. This enhancement doesn't change that API in any way; it just adds a new capability. How to add a new capability is detailed in the same EP.
> and how we plan to expose them in Hypershift API.
That's a decision which needs to be taken by the engineers responsible for the HyperShift API. It probably deserves a dedicated EP done by the HyperShift engineers.
UPD: a non-goal for the design of the HyperShift API has been added.
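For illustration, the component-selection mechanism referenced above associates CVO-managed manifests with a capability through an annotation. Below is a minimal sketch of how one of the ingress operator's manifests might be annotated, assuming the capability ends up being named "Ingress" (the specific manifest and the capability name are assumptions for illustration, not part of this EP's text):

```yaml
# Hypothetical sketch: annotating a CVO-managed manifest of the cluster
# ingress operator with the assumed "Ingress" capability name, so that
# the CVO skips this manifest when the capability is disabled.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-operator
  namespace: openshift-ingress-operator
  annotations:
    capability.openshift.io/name: Ingress
```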
I agree that this EP doesn't need to describe the capabilities API itself or how capabilities get specified on HyperShift. However, it seems to me that the name of the capability itself is an API, for the following reasons:
- Implementing this capability requires a new API definition in openshift/api.
- The ingress operator uses the API definition to specify which manifests to install when the capability is enabled.
- The cluster-admin may use the API definition to check whether the capability is enabled on a cluster.
In my opinion, it would be useful to document the new capability name here, under "API Extensions". (I would argue that any EP that defines a new capability should define that new capability under "API Extensions".)
The update of the cluster version API has been added to the API Extensions chapter.
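To make the discussion concrete, a minimal sketch of what the cluster version API addition might look like. The type name mirrors openshift/api's config/v1 conventions, and the "Ingress" capability value is an assumption for illustration; the actual constant is defined by the openshift/api change this EP references:

```go
package main

import "fmt"

// ClusterVersionCapability mirrors the string type used by
// openshift/api's config/v1 package for cluster capabilities.
type ClusterVersionCapability string

// Assumed name of the new capability introduced by this EP.
const ClusterVersionCapabilityIngress ClusterVersionCapability = "Ingress"

// capabilityEnabled reports whether a capability appears in the
// enabled set that the CVO publishes in ClusterVersion status.
func capabilityEnabled(enabled []ClusterVersionCapability, c ClusterVersionCapability) bool {
	for _, e := range enabled {
		if e == c {
			return true
		}
	}
	return false
}

func main() {
	// A hosted cluster with the ingress capability disabled.
	enabled := []ClusterVersionCapability{"Console", "Storage"}
	fmt.Println(capabilityEnabled(enabled, ClusterVersionCapabilityIngress)) // false
}
```

Other operators (console, etc.) would perform an equivalent check against `ClusterVersion` status before running ingress-dependent logic.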
> The following APIs are not available if the ingress capability is disabled:
> - CRDs which are part of the cluster operator's manifest payload:
>   - [IngressController](https://github.com/openshift/cluster-ingress-operator/blob/master/manifests/00-custom-resource-definition.yaml)
>   - [DNSRecord](https://github.com/openshift/cluster-ingress-operator/blob/master/manifests/00-custom-resource-definition-internal.yaml)
Wasn't there previous work for user-managed DNS? Maybe this deserves to be its own capability, also disabled on HyperShift?
You mean Configurable DNS Management for LoadBalancerService Ingress Controllers? That was an enhancement for the IngressController API to add the ability to disable DNS management for a given ingress controller.
Overall, I agree: the DNS management stands out and seems to be artificially embedded into the ingress operator, maybe due to the lack of a better place?
I would rather prefer the ExternalDNS Operator (recently added to the OperatorHub) to take over this functionality and remove this part from the ingress operator completely, rather than creating a dedicated capability for it.
The ingress operator manages DNS records for ingresscontrollers and gateways. The ingress operator defines and uses the DNSRecord CRD in the implementation of the operator's DNS management. This CRD is an internal implementation detail of the ingress operator and isn't supported for any other use.
It would be nice if the ingress operator used external-dns instead, but we've been saying that for quite some time: openshift/cluster-ingress-operator#59 (comment), openshift/cluster-ingress-operator#77. * grin *.
I don't understand why you would want separate capabilities for ingress and for DNS management for ingresscontrollers though.
> I don't understand why you would want separate capabilities for ingress and for DNS management for ingresscontrollers though.
I don't know if it was addressed to me or to Trevor, but I'll try to reply anyway.
I think we all agree that the DNSRecord CRD should belong to the ingress capability, and you made a good point which supports this agreement: "This CRD is an internal implementation detail of the ingress operator and isn't supported for any other use."
> #### Console operator
>
> To prevent the console operator from going degraded, [the console route check](https://github.com/openshift/console-operator/blob/master/pkg/console/controllers/healthcheck/controller.go#L122) should be skipped if the ingress capability is not enabled in `ClusterVersion`.
Should it? The Route API will still exist. Perhaps the API will be delivered via another controller chain (e.g. including the AWS load-balancer operator). If folks ask for the Console component but do not provide a controller that implements the Route, that seems like it would be worth the console operator complaining about.
I didn't see any ingress controller which implements the Route API. The AWS LB controller, while being the preferred solution for ROSA users, doesn't work with any APIs other than Kubernetes's. However, I agree that theoretically this can happen; for instance, we could implement Route API support in the OpenShift fork of the AWS LB controller. Although I doubt that this is something we would want to add to the scope of this enhancement.
So, practically speaking, as of now, we have a choice between letting the console operator go degraded or trying to tolerate this. I think that from the point of view of the customer who requested the change, it would be more logical not to see any operator going degraded when this feature is delivered.
> I didn't see any ingress controller which implements the Route API.
The ingress operator requests openshift-ingress-router RBAC to monitor Routes and update their status. And checking audit logs for 4.14 CI:
```console
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.14-e2e-aws-sdn-serial/1661746644430884864/artifacts/e2e-aws-sdn-serial/gather-audit-logs/artifacts/audit-logs.tar | tar -xz --strip-components=2
$ zgrep -h '"resource":"routes"' kube-apiserver/*.log.gz | jq -r 'select(.verb == "update" and .objectRef.subresource == "status") | .user.username' | sort | uniq -c
     17 system:serviceaccount:openshift-ingress:router
```

I'm not entirely clear on where that code lives, but it seems like something that would be disabled by the Ingress component.
> However I agree that theoretically this can happen, for instance, we can implement the Route API support in the OpenShift fork of the AWS LB controller.
That's one option, but I agree that it's heavy. Another option would be creating a shim Route controller that watched for Routes, translated them into Ingress requests for the AWS load balancer component to manage, and then proxied the status back from the Ingress resource to the Route resource. I have no idea how close the types are; maybe that sort of proxying is impossible, without knowledge of the specific Service being exposed?
> > I didn't see any ingress controller which implements the Route API.
>
> The ingress operator requests openshift-ingress-router RBAC to monitor Routes and update their status. And checking audit logs for 4.14 CI:
>
> ```console
> $ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.14-e2e-aws-sdn-serial/1661746644430884864/artifacts/e2e-aws-sdn-serial/gather-audit-logs/artifacts/audit-logs.tar | tar -xz --strip-components=2
> $ zgrep -h '"resource":"routes"' kube-apiserver/*.log.gz | jq -r 'select(.verb == "update" and .objectRef.subresource == "status") | .user.username' | sort | uniq -c
>      17 system:serviceaccount:openshift-ingress:router
> ```
>
> I'm not entirely clear on where that code lives, but it seems like something that would be disabled by the Ingress component.
Sorry, I meant that I didn't see any ingress controller apart from the OpenShift one: something like nginx-ingress-controller or kong-ingress-controller which could replace the ingress controller spawned by the cluster ingress operator.
> > However I agree that theoretically this can happen, for instance, we can implement the Route API support in the OpenShift fork of the AWS LB controller.
>
> That's one option, but I agree that it's heavy. Another option would be creating a shim Route controller that watched for Routes, translated them into Ingress requests for the AWS load balancer component to manage, and then proxied the status back from the Ingress resource to the Route resource. I have no idea how close the types are; maybe that sort of proxying is impossible, without knowledge of the specific Service being exposed?
A controller which translates routes into ingresses: this goes in the opposite direction to the "ingress to route" translation we already have in place.
@wking, are you proposing that enabling the "Console" component should require enabling the "Ingress" component?
Edit: I see that that option is discussed here: #1415 (comment)
> @wking, are you proposing that enabling the "Console" component should require enabling the "Ingress" component?
@Miciah: as of now we decided to not make that kind of dependency between the console and the ingress operators. The only change will be in the console operator to tolerate the absence of the ingress operator (Console Operator chapter).
/remove-lifecycle stale
> - Make the OpenShift installer tolerate the absence of the default ingress controller.
> - Make the cluster authentication operator tolerate the absence of the default ingress controller.
> - Make the cluster monitoring operator tolerate the absence of the default ingress controller.
> - Disable the Route API or try to tolerate the absence of the default ingress controller.
I find the revised phrasing confusing too:

> Tolerate the unadmitted routes for any workload except for the dependent cluster operators

Are you saying that once this EP is implemented, if a HyperShift cluster has the ingress capability turned off and there are unadmitted routes, then it is fine not to "tolerate" those routes? Does that mean it is expected that creating a route could break the cluster when the ingress capability is off?
Put another way, if you made this non-goal into a goal, what would need to change to "tolerate" unadmitted routes?
> - As a HyperShift engineer, I want to be able to disable the ingress operator on clusters that do not need it, so that the hosted control-plane can become lighter.
> - As a HyperShift engineer, I want cluster operators to tolerate the absence of the ingress operator and default router deployment, so that clusters do not report degraded status in the absence of these components.
> - As a HyperShift hosted cluster's user, I want the ingress capability to be reported as a known capability, so that cluster actors (admins, users, other operators) can check whether it is enabled for the cluster.
> - As a HyperShift hosted cluster's user, I want to avoid provisioning unnecessary infrastructure resources, such as an ingress ELB, when they are not needed.
I still don't know what a HyperShift engineer is. I guess you mean "Cluster Service Consumer", defined in the HyperShift concepts and personas as "The user empowered to request control planes, request workers, and drive upgrades or modify externalized configuration"? I'm not sure what "HyperShift hosted cluster's user" is either.
A "HyperShift engineer" carries the same meaning as "OpenShift engineer" in the User Stories template from the guidelines but for the HyperShift product.
"HyperShift hosted cluster's user" is indeed "Cluster Service Consumer", let me update this part.
> "HyperShift hosted cluster's user" is indeed "Cluster Service Consumer", let me update this part.
Done.
> A HyperShift Cluster User with a hosted cluster wants to use an AWS ALB to route the traffic to an application deployed on that cluster.
>
> #### Terminology
>
> [HyperShift concepts and personas](https://hypershift-docs.netlify.app/reference/concepts-and-personas/).
This isn't a permalink. Could you copy and paste the relevant definitions into the EP?
Done.
Miciah left a comment:

I gave this another read through and have some minor comments.
> - The console operator becomes degraded due to the failed admission checks of the console and download routes, see [the usage](https://github.com/search?q=repo%3Aopenshift%2Fconsole-operator%20IngressURI&type=code) of [IngressURI function](https://pkg.go.dev/github.com/openshift/library-go/pkg/route/routeapihelpers#IngressURI).
> - The console route is not served by the default ingress controller.
>
> To resolve the former problem, it is necessary to skip the route admission checks when the ingress capability is not enabled.
Not just the admission check, but the connectivity check needs to be skipped as well when the capability is not enabled.
Right, this is something we discussed with Jakub from the console team. The idea was to find all the occurrences of the IngressURI function (mentioned in the sentence above) and make them tolerate the absence of the ingress controller. I rephrased the sentence above to mention this and removed "admission" from this sentence so that it means "all checks" or "any checks".
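The tolerance described above can be sketched as a small wrapper around the existing check. This is a minimal illustration, not the console operator's actual code; the function and error names here are hypothetical, and library-go's IngressURI stands in as an opaque `lookupURI` callback:

```go
package main

import (
	"errors"
	"fmt"
)

// errNoIngress stands in for the error a route check returns when the
// route has no admitted ingress (illustrative name only).
var errNoIngress = errors.New("no admitted ingress for route")

// checkRouteHealth sketches how call sites of library-go's IngressURI
// could tolerate a disabled ingress capability: when the capability is
// off, the check is skipped instead of degrading the operator.
func checkRouteHealth(ingressCapabilityEnabled bool, lookupURI func() error) error {
	if !ingressCapabilityEnabled {
		// No ingress controller exists to admit routes, so there is
		// nothing meaningful to check; report success.
		return nil
	}
	return lookupURI()
}

func main() {
	failing := func() error { return errNoIngress }
	fmt.Println(checkRouteHealth(false, failing)) // tolerated: capability off
	fmt.Println(checkRouteHealth(true, failing))  // underlying error surfaces
}
```

The same gate would apply to both the admission check and the connectivity check discussed above.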
> #### Ingress to route controller
>
> [The Kubernetes Ingress API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#ingress-v1-networking-k8s-io) on OpenShift is implemented through translation into routes.
> [The ingress-to-route controller](https://github.com/openshift/route-controller-manager#ingress-to-route-controller) is the component responsible for this functionality. When the default ingress controller is not present, the routes are left without serving, resulting in the ingresses also being unserved. On HyperShift this can be mitigated by [any ingress controller implementation](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
> In the context of this enhancement, [AWS Load Balancer Operator](/enhancements/ingress/aws-load-balancer-operator.md) is the expected implementation.
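As a concrete illustration of the translation described in the quoted section, a user-created Ingress like the following would typically yield a generated Route pointing at the same Service. All names are hypothetical, and the generated Route is a sketch of the controller's output, not its exact shape:

```yaml
# Hypothetical user-created Ingress (names are illustrative).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: demo
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 8080
---
# Sketch of the Route the ingress-to-route controller would derive.
# With the ingress capability disabled, this Route would remain
# unadmitted because no router exists to serve it.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-<generated-suffix>   # the controller generates the name
  namespace: demo
spec:
  host: app.example.com
  to:
    kind: Service
    name: app
  port:
    targetPort: 8080
```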
I guess "Add Route API to the ingress capability" does address my question, albeit obliquely:

> [...] In the situation when the ingress is disabled the choice lies in between the following options:
> - Keep the Route API and let the OpenShift components create routes which are not served by any ingress controller.
> - Keep the Route API and let the OpenShift components use an alternative ingress which supports the Route API.
> - Remove the Route API and leave the OpenShift components to adapt to the situation, which implies code changes to stop creating routes and to migrate to a different ingress API (e.g. Kubernetes Ingress).
I figured that "OpenShift components" referred to operators that created routes for their operands, but I understand now you also mean to include the ingress-to-route controller.
This list sets up a false dichotomy; one unstated option would be to continue to let operators create their own routes but still turn off the ingress-to-route controller.
> The first option seems to be the easiest (for the implementation), [...]
I suppose this (ease of implementation) is ultimately the reason why we don't turn off the ingress-to-route controller when the ingress capability is not enabled, right?
> since the first option doesn't preclude the second one, it seems to be pragmatic to pick the first option, leaving open the possibility of implementing the second option later on.
I think it's a reasonable compromise to continue having the various operators (auth, console, monitoring) creating routes. Those routes could even be useful as references when creating equivalent ingresses, should the cluster-admin choose to do so.
However, leaving the ingress-to-route controller enabled seems less useful and more likely to mislead: the typical project-admin who creates an ingress and observes that OpenShift generates a route for that ingress could thereby be misled into expecting that OpenShift will accept traffic for that ingress.
I suppose there is the possibility that someone could turn off the ingress capability and then install a custom router deployment or even a third-party controller that implements the Route API, in which case the ingress-to-route controller could still be useful. That seems unlikely though.
Anyway, I still think it would make sense to add a short statement here to explain why we keep translating ingresses to routes even on a cluster that doesn't serve routes, with a link to the "Add Route API to the ingress capability" section for the lengthier explanation.
> N/A - no api extensions are being introduced in this proposal
>
> #### Support Procedures
>
> N/A - no api extensions are being introduced in this proposal
In the same vein as my previous comment, we now have one more thing to check (namely, whether the "ingress" capability is enabled) when users complain, "My route isn't working!".
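That support check can be sketched in code. The JSON below is a hand-written sample of what `ClusterVersion` status might contain on a cluster with the ingress capability disabled; the field paths follow the ClusterVersion status schema, and the "Ingress" capability name is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Sample ClusterVersion status JSON for a cluster where the ingress
// capability is disabled (hand-written for illustration).
const sample = `{"status":{"capabilities":{"enabledCapabilities":["Console","Storage"]}}}`

// clusterVersion models only the fields this check needs.
type clusterVersion struct {
	Status struct {
		Capabilities struct {
			EnabledCapabilities []string `json:"enabledCapabilities"`
		} `json:"capabilities"`
	} `json:"status"`
}

// ingressEnabled reports whether the "Ingress" capability appears in
// the enabled capabilities of the given ClusterVersion document.
func ingressEnabled(raw []byte) (bool, error) {
	var cv clusterVersion
	if err := json.Unmarshal(raw, &cv); err != nil {
		return false, err
	}
	for _, c := range cv.Status.Capabilities.EnabledCapabilities {
		if c == "Ingress" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, _ := ingressEnabled([]byte(sample))
	fmt.Println(ok) // false: unadmitted routes are expected on this cluster
}
```

In practice a support engineer would feed this the output of `oc get clusterversion version -o json`; if the capability is disabled, unadmitted routes are expected behavior rather than a bug.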
> - The authentication operator. It checks the availability of [the authentication server's](https://github.com/openshift/cluster-authentication-operator/blob/da39951a53ad95be28e32a48f278cc23f41e99a7/pkg/controllers/oauthendpoints/oauth_endpoints_controller.go#L30-L31)
>   and [console](https://github.com/openshift/cluster-authentication-operator/blob/da39951a53ad95be28e32a48f278cc23f41e99a7/pkg/controllers/configobservation/console/observe_consoleurl.go#L15) routes.
> Yes, there are quite some dependencies between the authentication and ingress operators. However they are not relevant for this EP, as the authentication server is not managed by the authentication operator on HyperShift. So, the authentication operator is not used on HyperShift.
This section is specifically about what would be required for standalone OpenShift though, so these details do seem relevant here. I'm fine with some hand-waving though (that is, to do W, we would need to do X, Y, and Z, and (wave hands) maybe some more things, but X, Y, and Z are reason enough not to do W).
> - [The component routes API](https://docs.openshift.com/container-platform/4.13/rest_api/config_apis/ingress-config-openshift-io-v1.html#status-componentroutes). The ingress operator is [responsible for the RBAC](https://github.com/openshift/cluster-ingress-operator/blob/8f98c618c5609cf7fcb97e5c61dc1e7a30576925/pkg/operator/controller/configurable-route/controller.go#L42-L52)
>   which the component operators need to be able to access the custom TLS certificates from `openshift-config` namespace.
>
> **Note**: it's difficult to anticipate the impact of the ingress operator removal on the installation process without the actual capability implementation.
I think the key point here is that implementing the capability for HyperShift is a useful step towards implementing the capability for standalone OpenShift as well, right?
Rephrased to:

> Note: it's difficult to anticipate the impact of the ingress operator removal on the installation process without the actual capability implementation. However, implementing the capability for HyperShift is a valuable step towards implementing the capability for standalone OpenShift as well.
> This approach implies the implementation of the ingress capability the way it's described in [how to implement a new capability](/enhancements/installer/component-selection.md#how-to-implement-a-new-capability).
> But unlike this enhancement, it doesn't have a goal to tolerate the absence of the ingress operator.
> That is, anything which can break during or after the cluster installation is allowed to break.
> This approach has a clear downside of the postponed failure which questions the correctness of the capability implementation from the user's point of view.
In other words, we should avoid giving the user a knob that breaks the cluster, right?
Yes. Rephrased this one to:

> This approach has a notable downside: it provides the user with a means to break the cluster, raising concerns about the correctness of the capability implementation from the user's perspective.
> Due to the lack of understanding of how HyperShift uses the cluster version operator to manage the ingress operator, we thought that the capability would serve only the informative purpose for the other OpenShift components like the console operator.
> That brought the idea of giving up with the capability in favor of some other cluster resource (existing or not) which could fulfill the need of informing other OpenShift components.
> Potentially that could save us from the difficult design and implementation decisions which the cluster capability implies. [The hypershift chapter](#hypershift) explains in details why the capability should be preferred over any other API.
This section confuses me. After reading it a few times, I think you're saying this:
- If we added a custom API, then we would need to do these two things:
- Define a new "turn ingress off" API.
- Modify HyperShift not to install the ingress operator when that API were used.
- However, the HyperShift team intends to add capabilities support to HyperShift, which means we only need to do these two things:
- Define a new ingress capability, which is easier than defining a custom API.
- Annotate the ingress operator's CVO manifests with the new capability, which is easier than modifying HyperShift.
Do I understand correctly?
> Do I understand correctly?
Yes.
#1555 is changing the enhancement template in a way that will cause the header check in the linter job to fail for existing PRs. If this PR is merged within the development period for 4.16, you may override the linter if the only failures are caused by issues with the headers (please make sure the markdown formatting is correct). If this PR is not merged before 4.16 development closes, please update the enhancement to conform to the new template.
Markdown linting fixed.
Inactive enhancement proposals go stale after 28d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle stale. If this proposal is safe to close now please do so.

/lifecycle stale
/remove-lifecycle stale
/lgtm
@alebedev87: all tests passed! Full PR test history. Your PR dashboard. Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
This EP aims at explaining the implementation details of RFE-3395. Similar tasks (making other cluster operators optional) didn't need a dedicated EP because they followed the procedure from the existing component selection EP. The cluster ingress operator differs from those operators in how it's managed on standalone OpenShift and on HyperShift. The complexity of the task required a formal document that can be reviewed, hence this EP.