
Conversation

@alebedev87
Contributor

@alebedev87 alebedev87 commented Jun 1, 2023

This EP aims to explain the implementation details of RFE-3395. Similar tasks (making other cluster operators optional) didn't need a dedicated EP because they followed the procedure from the existing component selection EP. The cluster ingress operator differs from other cluster operators in the way it's managed on standalone OpenShift and HyperShift. The complexity of the task warranted a formal document which can be reviewed, hence this EP.

@openshift-ci openshift-ci bot requested review from frobware and knobunc June 1, 2023 12:46
@alebedev87 alebedev87 force-pushed the optional-ingress-operator branch from 728ce54 to c9af855 Compare June 1, 2023 12:55
@alebedev87
Contributor Author

/retitle [WIP] Make ingress operator optional

Just found out about an important HyperShift detail which is not reflected in the EP; I have to correct it. Will be back unWIPed soon.

@openshift-ci openshift-ci bot changed the title Make ingress operator optional [WIP] Make ingress operator optional Jun 1, 2023
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 1, 2023
@alebedev87 alebedev87 force-pushed the optional-ingress-operator branch from 2f151ef to 35f31d7 Compare June 1, 2023 13:47
@alebedev87 alebedev87 changed the title [WIP] Make ingress operator optional Make ingress operator optional Jun 1, 2023
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 1, 2023
@alebedev87 alebedev87 force-pushed the optional-ingress-operator branch from 35f31d7 to fa24c36 Compare June 1, 2023 14:55
@alebedev87
Contributor Author

/assign @bparees, @Miciah

@alebedev87
Contributor Author

/assign @bparees

@alebedev87
Contributor Author

/assign @Miciah

@alebedev87
Contributor Author

/assign @csrwng

TBD

### API Extensions
N/A - no API extensions are being introduced in this proposal
Member

This enhancement proposes adding the capabilities property to the HostedCluster (and presumably also the HostedControlPlane) spec structures, and modifications to existing CRDs are part of this section's target subject. I'm fine with "Skipping this section while we're scaffolding alternatives" or some other placeholder (and I'm not an approver, so it doesn't matter whether I'm fine with something or not), but I'd have expected the capabilities additions to be discussed in more detail here.

Contributor Author

@alebedev87 alebedev87 Jun 2, 2023

The API for HostedCluster was made up just for demonstration purposes. I'm not proposing to add it; I'm just trying to guess what it may look like. Maybe I should not add the HostedCluster YAML to this EP at all, to avoid confusion.
Sorry, I wanted to add you as an approver, I just forgot. Correcting this...

Contributor Author

Removed the HostedCluster YAML to avoid confusion. Added wking as approver.

Contributor Author

@alebedev87 alebedev87 Jun 8, 2023

> Agree with Trevor, it'd be good to see API details on how the new capability

The details of how the capability API can be used in the installer or CVO are described in the component selection EP. This enhancement doesn't change it in any way, just adds a new capability. How to add a new capability is detailed in the same EP.

> and how we plan to expose them in Hypershift API.

That's a decision which needs to be taken by the engineers responsible for the HyperShift API. It probably deserves a dedicated EP written by the HyperShift engineers.

UPD: a non-goal for the design of the HyperShift API has been added.

Contributor

I agree that this EP doesn't need to describe the capabilities API itself or how capabilities get specified on HyperShift. However, it seems to me that the name of the capability itself is an API, for the following reasons:

- Implementing this capability requires a new API definition in openshift/api.
- The ingress operator uses the API definition to specify which manifests to install when the capability is enabled.
- The cluster-admin may use the API definition to check whether the capability is enabled on a cluster.

In my opinion, it would be useful to document the new capability name here, under "API Extensions". (I would argue that any EP that defines a new capability should define that new capability under "API Extensions".)

Contributor Author

The update of the cluster version API has been added to the API Extensions chapter.

The following APIs are not available if the ingress capability is disabled:
- CRDs which are part of the cluster operator's manifest payload:
  - [IngressController](https://github.com/openshift/cluster-ingress-operator/blob/master/manifests/00-custom-resource-definition.yaml)
  - [DNSRecord](https://github.com/openshift/cluster-ingress-operator/blob/master/manifests/00-custom-resource-definition-internal.yaml)
Member

Wasn't there previous work for user-managed DNS? Maybe this deserves to be its own capability, also disabled on HyperShift?

Contributor Author

@alebedev87 alebedev87 Jun 2, 2023

You mean Configurable DNS Management for LoadBalancerService Ingress Controllers? That was an enhancement to the IngressController API to make it possible to disable DNS management for a given ingress controller.
Overall, I agree: the DNS management stands out and seems to be artificially embedded into the ingress operator, perhaps for lack of a better place.
I'd rather have the ExternalDNS Operator (recently added to the OperatorHub) take over this functionality and remove this part from the ingress operator completely than create a dedicated capability for it.

Contributor

The ingress operator manages DNS records for ingresscontrollers and gateways. The ingress operator defines and uses the DNSRecord CRD in the implementation of the operator's DNS management. This CRD is an internal implementation detail of the ingress operator and isn't supported for any other use.

It would be nice if the ingress operator used external-dns instead, but we've been saying that for quite some time: openshift/cluster-ingress-operator#59 (comment), openshift/cluster-ingress-operator#77. * grin *.

I don't understand why you would want separate capabilities for ingress and for DNS management for ingresscontrollers though.

Contributor Author

@alebedev87 alebedev87 Jun 28, 2023

> I don't understand why you would want separate capabilities for ingress and for DNS management for ingresscontrollers though.

I don't know if it was addressed to me or to Trevor, but I'll try to reply anyway.
I think we all agree that the DNSRecord CRD should belong to the ingress capability, and you made a good point which adds to this agreement: "This CRD is an internal implementation detail of the ingress operator and isn't supported for any other use."


#### Console operator

To prevent the console operator from going degraded, [the console route check](https://github.com/openshift/console-operator/blob/master/pkg/console/controllers/healthcheck/controller.go#L122) should be skipped if the ingress capability is not enabled in `ClusterVersion`.
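For illustration, a minimal sketch (not part of the EP text) of what such a capability gate could look like for an operator reading `ClusterVersion`, assuming the capability ends up named `Ingress` as proposed:

```go
package capability

import (
	configv1 "github.com/openshift/api/config/v1"
)

// ingressCapabilityEnabled reports whether the ingress capability is listed
// among the enabled capabilities in the ClusterVersion status. The "Ingress"
// capability name is the one proposed by this EP, not an existing constant.
func ingressCapabilityEnabled(cv *configv1.ClusterVersion) bool {
	for _, c := range cv.Status.Capabilities.EnabledCapabilities {
		if c == configv1.ClusterVersionCapability("Ingress") {
			return true
		}
	}
	return false
}
```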
Member

Should it? The Route API will still exist. Perhaps the API will be delivered via another controller chain (e.g. including the AWS load-balancer operator). If folks ask for the Console component but do not provide a controller that implements the Route, that seems like it would be worth the console operator complaining about.

Contributor Author

I didn't see any ingress controller which implements the Route API. The AWS LB controller, while being the preferred solution for ROSA users, doesn't work with any APIs other than Kubernetes's. However, I agree that theoretically this can happen; for instance, we could implement Route API support in the OpenShift fork of the AWS LB controller. I doubt, though, that this is something we would want to add to the scope of this enhancement.
So, practically speaking, as of now, we have a choice between letting the console operator go degraded and trying to tolerate this. I think that, from the point of view of the customer who requested the change, it would be more logical not to see any operator go degraded when this feature is delivered.

Member

> I didn't see any ingress controller which implements the Route API.

The ingress operator requests openshift-ingress-router RBAC to monitor Routes and update their status. And checking audit logs for 4.14 CI:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.14-e2e-aws-sdn-serial/1661746644430884864/artifacts/e2e-aws-sdn-serial/gather-audit-logs/artifacts/audit-logs.tar | tar -xz --strip-components=2
$ zgrep -h '"resource":"routes"' kube-apiserver/*.log.gz | jq -r 'select(.verb == "update" and .objectRef.subresource == "status") | .user.username' | sort | uniq -c
     17 system:serviceaccount:openshift-ingress:router

I'm not entirely clear on where that code lives, but it seems like something that would be disabled by the Ingress component.

> However, I agree that theoretically this can happen; for instance, we could implement Route API support in the OpenShift fork of the AWS LB controller.

That's one option, but I agree that it's heavy. Another option would be creating a shim Route controller that watched for Routes, translated them into Ingress requests for the AWS load balancer component to manage, and then proxied the status back from the Ingress resource to the Route resource. I have no idea how close the types are; maybe that sort of proxying is impossible, without knowledge of the specific Service being exposed?
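To make the shim idea concrete, here is a purely illustrative skeleton of the Route-to-Ingress translation (all names hypothetical; TLS, weights, port mapping, and the status proxying are elided, and whether they map cleanly is exactly the open question above):

```go
package shim

import (
	routev1 "github.com/openshift/api/route/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// routeToIngress sketches only the happy-path translation of a Route into
// an Ingress that a third-party controller (e.g. the AWS load balancer
// controller) could serve. A real shim would also proxy status back.
func routeToIngress(route *routev1.Route) *networkingv1.Ingress {
	pathType := networkingv1.PathTypePrefix
	return &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: route.Name, Namespace: route.Namespace},
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{
				Host: route.Spec.Host,
				IngressRuleValue: networkingv1.IngressRuleValue{
					HTTP: &networkingv1.HTTPIngressRuleValue{
						Paths: []networkingv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: networkingv1.IngressBackend{
								Service: &networkingv1.IngressServiceBackend{
									// A Route targets a Service by name; the real port
									// mapping would have to come from route.Spec.Port.
									Name: route.Spec.To.Name,
									Port: networkingv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}
}
```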

Contributor Author

> > I didn't see any ingress controller which implements the Route API.
>
> The ingress operator requests openshift-ingress-router RBAC to monitor Routes and update their status. And checking audit logs for 4.14 CI:
>
> $ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.14-e2e-aws-sdn-serial/1661746644430884864/artifacts/e2e-aws-sdn-serial/gather-audit-logs/artifacts/audit-logs.tar | tar -xz --strip-components=2
> $ zgrep -h '"resource":"routes"' kube-apiserver/*.log.gz | jq -r 'select(.verb == "update" and .objectRef.subresource == "status") | .user.username' | sort | uniq -c
>      17 system:serviceaccount:openshift-ingress:router
>
> I'm not entirely clear on where that code lives, but it seems like something that would be disabled by the Ingress component.

Sorry, I meant that I didn't see any ingress controller apart from the OpenShift one (something like nginx-ingress-controller or kong-ingress-controller) which could replace the ingress controller spawned by the cluster ingress operator.

> > However, I agree that theoretically this can happen; for instance, we could implement Route API support in the OpenShift fork of the AWS LB controller.
>
> That's one option, but I agree that it's heavy. Another option would be creating a shim Route controller that watched for Routes, translated them into Ingress requests for the AWS load balancer component to manage, and then proxied the status back from the Ingress resource to the Route resource. I have no idea how close the types are; maybe that sort of proxying is impossible, without knowledge of the specific Service being exposed?

A controller which translates routes to ingresses would go in the opposite direction to the "ingress to route" translation we already have in place.

Contributor

@Miciah Miciah Jun 27, 2023

@wking, are you proposing that enabling the "Console" component should require enabling the "Ingress" component?

Edit: I see that that option is discussed here: #1415 (comment)

Contributor Author

> @wking, are you proposing that enabling the "Console" component should require enabling the "Ingress" component?

@Miciah: as of now, we decided not to introduce that kind of dependency between the console and the ingress operators. The only change will be in the console operator, to tolerate the absence of the ingress operator (see the Console Operator chapter).

@alebedev87 alebedev87 force-pushed the optional-ingress-operator branch from 4c63278 to 4cc8915 Compare June 2, 2023 14:32
@alebedev87 alebedev87 force-pushed the optional-ingress-operator branch from 4cc8915 to 3bf91ef Compare June 2, 2023 16:00
@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 1, 2024
@alebedev87
Copy link
Contributor Author

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 1, 2024
- Make the OpenShift installer tolerate the absence of the default ingress controller.
- Make the cluster authentication operator tolerate the absence of the default ingress controller.
- Make the cluster monitoring operator tolerate the absence of the default ingress controller.
- Disable the Route API or try to tolerate the absence of the default ingress controller.
Contributor

I find the revised phrasing confusing too:

> Tolerate the unadmitted routes for any workload except for the dependent cluster operators

Are you saying that once this EP is implemented, if a HyperShift cluster has the ingress capability turned off, and if there are unadmitted routes, then it is fine not to "tolerate" those routes? Does that mean it is expected that creating a route could break the cluster when the ingress capability is off?

Put another way, if you made this non-goal into a goal, what would need to be changed to "tolerate" unadmitted routes?

Comment on lines 66 to 69
- As a HyperShift engineer, I want to be able to disable the ingress operator on clusters that do not need it, so that the hosted control-plane can become lighter.
- As a HyperShift engineer, I want cluster operators to tolerate the absence of the ingress operator and default router deployment, so that clusters do not report degraded status in the absence of these components.
- As a HyperShift hosted cluster's user, I want the ingress capability to be reported as a known capability, so that cluster actors (admins, users, other operators) can check whether it is enabled for the cluster.
- As a HyperShift hosted cluster's user, I want to avoid provisioning unnecessary infrastructure resources, such as an ingress ELB, when they are not needed.
Contributor

I still don't know what a HyperShift engineer is. I guess you mean "Cluster Service Consumer", defined in the HyperShift concepts and personas as "The user empowered to request control planes, request workers, and drive upgrades or modify externalized configuration"? I'm not sure what "HyperShift hosted cluster's user" is either.

Contributor Author

A "HyperShift engineer" carries the same meaning as "OpenShift engineer" in the User Stories template from the guidelines but for the HyperShift product.

"HyperShift hosted cluster's user" is indeed "Cluster Service Consumer", let me update this part.

Contributor Author

"HyperShift hosted cluster's user" is indeed "Cluster Service Consumer", let me update this part.

Done.

A HyperShift Cluster User with a hosted cluster wants to use an AWS ALB to route the traffic to an application deployed on that cluster.

#### Terminology
[HyperShift concepts and personas](https://hypershift-docs.netlify.app/reference/concepts-and-personas/).
Contributor

This isn't a permalink. Could you copy and paste the relevant definitions into the EP?

Contributor Author

Done.


Contributor

@Miciah Miciah left a comment

I gave this another read through and have some minor comments.

- The console operator becomes degraded due to the failed admission checks of the console and download routes; see [the usage](https://github.com/search?q=repo%3Aopenshift%2Fconsole-operator%20IngressURI&type=code) of [the IngressURI function](https://pkg.go.dev/github.com/openshift/library-go/pkg/route/routeapihelpers#IngressURI).
- The console route is not served by the default ingress controller.

To resolve the former problem, it is necessary to skip the route admission checks when the ingress capability is not enabled.
Contributor

Not just the admission check, but the connectivity check needs to be skipped as well when the capability is not enabled.

Contributor Author

Right, this is something we discussed with Jakub from the console team. The idea was to find all the occurrences of the IngressURI function (mentioned in the sentence above) and make them tolerate the absence of the ingress controller. I rephrased the sentence above to mention this and removed "admission" in this sentence so that it means "all checks" or "any checks".
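As a rough sketch of what "tolerating the absence" could mean at each such call site (reusing the ingressCapabilityEnabled helper sketched earlier; the shape of the check is an assumption, not the actual console-operator patch):

```go
package healthcheck

import (
	configv1 "github.com/openshift/api/config/v1"
	routev1 "github.com/openshift/api/route/v1"
	"github.com/openshift/library-go/pkg/route/routeapihelpers"
)

// checkRouteHealth wraps an IngressURI call site: when the ingress
// capability is disabled, a permanently unadmitted route is expected,
// so the check is skipped instead of degrading the operator.
// ingressCapabilityEnabled is the helper from the earlier sketch.
func checkRouteHealth(cv *configv1.ClusterVersion, route *routev1.Route) error {
	if !ingressCapabilityEnabled(cv) {
		return nil // no ingress controller: tolerate the unadmitted route
	}
	uri, _, err := routeapihelpers.IngressURI(route, route.Spec.Host)
	if err != nil {
		return err // the route has not been admitted by any router
	}
	_ = uri // ...probe uri for connectivity as the operator does today...
	return nil
}
```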

Comment on lines +208 to +260
#### Ingress to route controller

[The Kubernetes Ingress API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#ingress-v1-networking-k8s-io) on OpenShift is implemented through translation into routes.
[The ingress-to-route controller](https://github.com/openshift/route-controller-manager#ingress-to-route-controller) is the component responsible for this functionality. When the default ingress controller is not present, the routes are left without serving, resulting in the ingresses also being unserved. On HyperShift this can be mitigated by [any ingress controller implementation](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).
In the context of this enhancement, [AWS Load Balancer Operator](/enhancements/ingress/aws-load-balancer-operator.md) is the expected implementation.
Contributor

I guess "Add Route API to the ingress capability" does address my question, albeit obliquely:

> [...] In the situation when the ingress is disabled the choice lies in between the following options:
>
> 1. Keep the Route API and let the OpenShift components create routes which are not served by any ingress controller.
> 2. Keep the Route API and let the OpenShift components use an alternative ingress which supports the Route API.
> 3. Remove the Route API and leave the OpenShift components to adapt to the situation, which implies code changes to stop creating routes and to migrate to a different ingress API (e.g. Kubernetes Ingress).

I figured that "OpenShift components" referred to operators that created routes for their operands, but I understand now you also mean to include the ingress-to-route controller.

This list sets up a false dichotomy; one unstated option would be to continue to let operators create their own routes but still turn off the ingress-to-route controller.

> The first option seems to be the easiest (for the implementation), [...]

I suppose this (ease of implementation) is ultimately the reason why we don't turn off the ingress-to-route controller when the ingress capability is not enabled, right?

> since the first option doesn't preclude the second one, it seems to be pragmatic to pick the first option, leaving open the possibility of implementing the second option later on.

I think it's a reasonable compromise to continue having the various operators (auth, console, monitoring) creating routes. Those routes could even be useful as references when creating equivalent ingresses, should the cluster-admin choose to do so.

However, leaving the ingress-to-route controller enabled seems less useful and more likely to mislead: the typical project-admin who creates an ingress and observes that OpenShift generates a route for it could be misled into expecting OpenShift to accept traffic for that ingress.

I suppose there is the possibility that someone could turn off the ingress capability and then install a custom router deployment or even a third-party controller that implements the Route API, in which case the ingress-to-route controller could still be useful. That seems unlikely though.

Anyway, I still think it would make sense to add a short statement here to explain why we keep translating ingresses to routes even on a cluster that doesn't serve routes, with a link to the "Add Route API to the ingress capability" section for the lengthier explanation.

N/A - no API extensions are being introduced in this proposal

#### Support Procedures
N/A - no API extensions are being introduced in this proposal
Contributor

In the same vein as my previous comment, we now have one more thing to check (namely, whether the "ingress" capability is enabled) when users complain, "My route isn't working!".


Comment on lines 266 to 317
- The authentication operator. It checks the availability of [the authentication server's](https://github.com/openshift/cluster-authentication-operator/blob/da39951a53ad95be28e32a48f278cc23f41e99a7/pkg/controllers/oauthendpoints/oauth_endpoints_controller.go#L30-L31)
and [console](https://github.com/openshift/cluster-authentication-operator/blob/da39951a53ad95be28e32a48f278cc23f41e99a7/pkg/controllers/configobservation/console/observe_consoleurl.go#L15) routes.
Contributor

> Yes, there are quite some dependencies between the authentication and ingress operators. However they are not relevant for this EP, as the authentication server is not managed by the authentication operator on HyperShift. So, the authentication operator is not used on HyperShift.

This section is specifically about what would be required for standalone OpenShift, though, so these details do seem relevant here. I'm fine with some hand-waving (that is, to do W, we would need to do X, Y, and Z, and (wave hands) maybe some more things, but X, Y, and Z are reason enough not to do W).

- [The component routes API](https://docs.openshift.com/container-platform/4.13/rest_api/config_apis/ingress-config-openshift-io-v1.html#status-componentroutes). The ingress operator is [responsible for the RBAC](https://github.com/openshift/cluster-ingress-operator/blob/8f98c618c5609cf7fcb97e5c61dc1e7a30576925/pkg/operator/controller/configurable-route/controller.go#L42-L52)
which the component operators need in order to access the custom TLS certificates from the `openshift-config` namespace.

**Note**: it's difficult to anticipate the impact of the ingress operator removal on the installation process without the actual capability implementation.
Contributor

I think the key point here is that implementing the capability for HyperShift is a useful step towards implementing the capability for standalone OpenShift as well, right?

Contributor Author

Rephrased to:

> Note: it's difficult to anticipate the impact of the ingress operator removal on the installation process without the actual capability implementation. However, implementing the capability for HyperShift is a valuable step towards implementing the capability for standalone OpenShift as well.

This approach implies the implementation of the ingress capability the way it's described in [how to implement a new capability](/enhancements/installer/component-selection.md#how-to-implement-a-new-capability).
But unlike this enhancement, it doesn't have a goal to tolerate the absence of the ingress operator.
That is, anything which can break during or after the cluster installation is allowed to break.
This approach has a clear downside of the postponed failure which questions the correctness of the capability implementation from the user's point of view.
Contributor

In other words, we should avoid giving the user a knob that breaks the cluster, right?

Contributor Author

Yes. Rephrased this one to:

> This approach has a notable downside: it provides the user with a means to break the cluster, raising concerns about the correctness of the capability implementation from the user's perspective.


Due to the lack of understanding of how HyperShift uses the cluster version operator to manage the ingress operator, we thought that the capability would serve only an informative purpose for other OpenShift components like the console operator.
That brought up the idea of giving up on the capability in favor of some other cluster resource (existing or new) which could fulfill the need of informing other OpenShift components.
Potentially that could save us from the difficult design and implementation decisions which the cluster capability implies. [The hypershift chapter](#hypershift) explains in detail why the capability should be preferred over any other API.
Contributor

This section confuses me. After reading it a few times, I think you're saying this:

- If we added a custom API, then we would need to do these two things:
  1. Define a new "turn ingress off" API.
  2. Modify HyperShift not to install the ingress operator when that API were used.
- However, the HyperShift team intends to add capabilities support to HyperShift, which means we only need to do these two things:
  1. Define a new ingress capability, which is easier than defining a custom API.
  2. Annotate the ingress operator's CVO manifests with the new capability, which is easier than modifying HyperShift.

Do I understand correctly?

Contributor Author

> Do I understand correctly?

Yes.
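For reference: under the component-selection convention, CVO-managed manifests declare their capability with the `capability.openshift.io/name` annotation, so the ingress operator's manifests would presumably end up carrying something like this (hypothetical excerpt; `Ingress` is the capability name proposed here):

```yaml
# Hypothetical excerpt of an ingress operator manifest annotated with the
# proposed capability, following the existing component-selection convention.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-operator
  namespace: openshift-ingress-operator
  annotations:
    capability.openshift.io/name: Ingress
```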

@dhellmann
Contributor

#1555 is changing the enhancement template in a way that will cause the header check in the linter job to fail for existing PRs. If this PR is merged within the development period for 4.16 you may override the linter if the only failures are caused by issues with the headers (please make sure the markdown formatting is correct). If this PR is not merged before 4.16 development closes, please update the enhancement to conform to the new template.

@alebedev87 alebedev87 force-pushed the optional-ingress-operator branch 3 times, most recently from 20f51d0 to 73ece1f Compare February 26, 2024 17:21
@alebedev87
Contributor Author

alebedev87 commented Feb 26, 2024

Markdown linting fixed.

@alebedev87 alebedev87 force-pushed the optional-ingress-operator branch 7 times, most recently from f4de3e3 to d4d5bfe Compare March 12, 2024 11:45
@openshift-bot

Inactive enhancement proposals go stale after 28d of inactivity.

See https://github.com/openshift/enhancements#life-cycle for details.

Mark the proposal as fresh by commenting /remove-lifecycle stale.
Stale proposals rot after an additional 7d of inactivity and eventually close.
Exclude this proposal from closing by commenting /lifecycle frozen.

If this proposal is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 10, 2024
@Miciah
Contributor

Miciah commented Apr 12, 2024

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 12, 2024
@Miciah
Contributor

Miciah commented Apr 18, 2024

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Apr 18, 2024
@openshift-ci
Contributor

openshift-ci bot commented Apr 18, 2024

@alebedev87: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.


Labels

- approved: Indicates a PR has been approved by an approver from all required OWNERS files.
- jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
- lgtm: Indicates that a PR is ready to be merged.
