Add ProxyProtocol flag to enable PROXY_PROTOCOL on ingress routers #354
Conversation
This is enabled by default when the platform is AWS, but there is no reason why it should not be available on other platforms or configurable by admins. This is a gap between OCP 3 and OCP 4, and it is important for any customers who are not using AWS.
|
@sqtran: No Bugzilla bug is referenced in the title of this pull request.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
Hi @sqtran. Thanks for your PR. I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: sqtran. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment. |
|
/assign @knobunc |
|
Is this the right branch to be working in? 4.2 is already released and 4.3 is coming soon, so would the right place be 4.4? Let me know so I can port this to the right branch. |
|
Thanks for the PR! Can you describe the use-case? We use PROXY protocol on AWS for source address preservation; other cloud providers (at least Azure and GCP) preserve the source address without using PROXY protocol. An API change like this may require a proposal in https://github.com/openshift/enhancements and in any case needs a PR in https://github.com/openshift/api before we can implement it. Once we get to the implementation phase, the PR should be against the master branch (once merged, it can be backported to release branches if necessary). |
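For background, a minimal Go sketch of what the PROXY protocol carries: the original client address is sent as a text header ahead of the proxied TCP payload, which is how the source address is preserved across a load balancer (the addresses below are example values):

package main

import "fmt"

// proxyV1Header builds a PROXY protocol version 1 header: the literal "PROXY",
// the address family, source and destination addresses, and source and
// destination ports, terminated by CRLF and sent before the application data.
func proxyV1Header(srcIP, dstIP string, srcPort, dstPort int) string {
	return fmt.Sprintf("PROXY TCP4 %s %s %d %d\r\n", srcIP, dstIP, srcPort, dstPort)
}

func main() {
	// Example: a client at 203.0.113.7 reaching the routers through a load balancer on port 443.
	fmt.Print(proxyV1Header("203.0.113.7", "192.0.2.10", 51234, 443))
}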
|
@Miciah I can certainly try to explain our need. |
|
@sbktc, can you describe your environment in a little more detail? Are you using the "HostNetwork" or "NodePortService" endpoint publishing strategy type? Do you mind describing how you configured the load balancer?

An important goal is that the operator should automatically configure as much as possible. We want to avoid adding new API fields to specify some configuration if the operator can infer the appropriate configuration from information it already has or can get. However, if a new API is truly needed, we want to define it in the appropriate place.

The ingresscontroller API defines several endpoint publishing strategy types: "LoadBalancerService" to use a LoadBalancer-type Kubernetes service, "NodePortService" to use a NodePort-type service, "HostNetwork" to listen on the node hosts' ports, and "Private" to listen only on the cluster network. Generally, "LoadBalancerService" is intended for cloud platforms, and "NodePortService" and "HostNetwork" are intended for bare metal.

With "LoadBalancerService", the operator can determine whether PROXY protocol is needed based on the specific platform: Azure's and GCP's load balancers provide source preservation and therefore do not need PROXY protocol, whereas AWS ELBs do not have source preservation and therefore do need PROXY protocol. Thus for "LoadBalancerService", we can determine from the platform whether or not we need PROXY protocol, and we don't need an API at all for this endpoint publishing strategy type.

With "NodePortService" and "HostNetwork" (which cover bare-metal use-cases), the administrator typically configures an external load balancer, which may or may not use PROXY protocol. In this case, is it possible for the operator somehow to infer whether or not the external load balancer uses PROXY protocol? Can we avoid defining a new API? If not, where does it make the most sense to define a new API? I don't have any good ideas as to how the operator could detect whether or not to use PROXY protocol for "NodePortService" or "HostNetwork"; suggestions are welcome. Failing that, it seems that an API is desirable for the "NodePortService" and "HostNetwork" endpoint publishing strategy types but not for "LoadBalancerService" or "Private", right?

By the way, there is an enhancement proposal to use MetalLB to enable LoadBalancer-type Kubernetes services on bare metal: openshift/enhancements#356. I have not yet considered the relationship between MetalLB and PROXY protocol; there might be some important implications there for how/if we need an API to configure PROXY protocol. |
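For illustration only, a minimal Go sketch of the platform-based inference described above for the "LoadBalancerService" case; the helper name and the plain-string platform values are stand-ins, not the operator's actual code:

package main

import "fmt"

// needsProxyProtocol sketches how the operator could infer, from the platform
// alone, whether a "LoadBalancerService" ingresscontroller needs PROXY protocol.
// The real operator reads the platform from the cluster infrastructure config;
// plain strings are used here to keep the example self-contained.
func needsProxyProtocol(platform string) bool {
	switch platform {
	case "AWS":
		// AWS ELBs do not preserve the source address, so PROXY protocol is needed.
		return true
	case "Azure", "GCP":
		// These load balancers preserve the source address without PROXY protocol.
		return false
	default:
		// Bare metal and other platforms cannot be inferred automatically,
		// which is exactly the gap discussed for "NodePortService" and "HostNetwork".
		return false
	}
}

func main() {
	fmt.Println(needsProxyProtocol("AWS"))   // true
	fmt.Println(needsProxyProtocol("Azure")) // false
}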
|
@Miciah sure, I'd like to try to elaborate a bit on our setup. I've been away on my summer holidays, so it has not been possible to get back sooner.

We have a pretty straightforward OCP setup, originally born out of an OCP v4.2 cluster (upgraded along the way to v4.5). The initial setup was based on https://www.openshift.com/blog/openshift-4-bare-metal-install-quickstart. Added to that is a wildcard certificate for our *.apps.. domain, which the default ingress router uses for serving secured routes. Traffic flows: Network -> ocp-lb-01.kb.dk (HAProxy L4 LB) -> worker{01,02,03}-ocp-test.kb.dk (HAProxy SSL termination L7 LB) -> pods.

Now I can't (currently) see a way for the ingresscontroller to determine whether it should use the PROXY protocol or not, as it simply does not have any idea of the external load balancer. So I'm afraid I can't contribute much towards a solution, other than making it a configuration option. It would however be a solution to our issue, and Red Hat has been so kind as to take in an RFE (RFE-1097). |
|
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale |
|
I don't get why this PR hasn't been merged yet to expose to customers a feature that was already there back in OCP 3.x. |
|
@sqtran, maybe you should add code to handle the feature as an annotation on the CR, like it's done for the HTTP2 feature in deployment.go:

const RouterDefaultEnableHTTP2Annotation = "ingress.operator.openshift.io/default-enable-http2"

func HTTP2IsEnabledByAnnotation(m map[string]string) (bool, bool) {
	if val, ok := m[RouterDefaultEnableHTTP2Annotation]; ok {
		v, _ := strconv.ParseBool(val)
		return true, v
	}
	return false, false
}

func HTTP2IsEnabled(ic *operatorv1.IngressController, ingressConfig *configv1.Ingress) bool {
	controllerHasHTTP2Annotation, controllerHasHTTP2Enabled := HTTP2IsEnabledByAnnotation(ic.Annotations)
	_, configHasHTTP2Enabled := HTTP2IsEnabledByAnnotation(ingressConfig.Annotations)
	if controllerHasHTTP2Annotation {
		return controllerHasHTTP2Enabled
	}
	return configHasHTTP2Enabled
} |
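For illustration, a minimal sketch of how the same annotation pattern could be applied to PROXY protocol. The annotation key and helper below are hypothetical and do not exist in the operator; the sketch assumes it would sit next to the HTTP2 helpers above in deployment.go, so strconv and the other imports are already available:

// RouterEnableProxyProtocolAnnotation is a hypothetical annotation key, named by
// analogy with the HTTP/2 annotation; it is not part of the operator today.
const RouterEnableProxyProtocolAnnotation = "ingress.operator.openshift.io/enable-proxy-protocol"

// ProxyProtocolIsEnabledByAnnotation reports whether the hypothetical annotation
// is present and, if present, whether it parses as true.
func ProxyProtocolIsEnabledByAnnotation(m map[string]string) (bool, bool) {
	if val, ok := m[RouterEnableProxyProtocolAnnotation]; ok {
		v, _ := strconv.ParseBool(val)
		return true, v
	}
	return false, false
}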
|
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten |
|
/remove-lifecycle rotten |
|
When can we expect the PROXY protocol for OKD? |
|
@myg0v, OKD and OCP are in the same boat. Even though this PR is over a year old, the feature was being looked at by RH engineering for 4.7 and it has now been postponed to 4.8. |
|
This is an important change, as without it haproxy.router.openshift.io/ip_whitelist annotations on routes are meaningless. There needs to be an effective way for users to add an IP whitelist to their route so they can control who has access to their application. |
|
An API to enable PROXY protocol was implemented in #581. |
|
@Miciah thanks for the update, but am I reading this correctly that this change is only for NodePortService? I am interested in enabling the PROXY protocol on the default ingress router. In my situation we have a user-provisioned layer 4 HAProxy in front of our cluster using TCP mode, as recommended in the OpenShift/OKD documentation. The problem is that without the PROXY protocol the cluster does not see the origin IP but sees the HAProxy IP address. As a result, haproxy.router.openshift.io/ip_whitelist annotations are ineffective, which causes a major security headache. Will the #581 change fix this? If it does, it is not entirely clear how; will it require a reinstall of the cluster? |
|
Sorry, I partly answered my own question. On closer inspection of the changes I can see it does apply to HostNetwork (the default method for bare metal) as well, so it looks promising. The thing I remain uncertain about is: would I be able to add protocol: PROXY to the existing router-default deployment, or will I have to add it to the manifest and reinstall? |
|
Right, PROXY protocol can be enabled when using a nodeport service or when using the host network. It should be possible to change it on an existing ingresscontroller; if that does not work, that is a defect that needs to be fixed. It should also be possible to delete and recreate an ingresscontroller, which will definitely cause the new ingresscontroller to get the new configuration. |
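For illustration, a self-contained Go sketch of the shape of the configuration being discussed: PROXY protocol is requested through the ingresscontroller's endpoint publishing strategy (protocol: PROXY, per #581). The struct types below are simplified stand-ins defined locally for the example, not the real openshift/api types:

package main

import "fmt"

// Simplified stand-ins for the ingresscontroller API shape; the field names
// mirror the protocol: PROXY setting discussed above, but these are not the
// operator's real Go types.
type HostNetworkStrategy struct {
	Protocol string // "", "TCP", or "PROXY"
}

type EndpointPublishingStrategy struct {
	Type        string
	HostNetwork *HostNetworkStrategy
}

type IngressControllerSpec struct {
	EndpointPublishingStrategy *EndpointPublishingStrategy
}

func main() {
	// Request PROXY protocol for a HostNetwork ingresscontroller.
	spec := IngressControllerSpec{
		EndpointPublishingStrategy: &EndpointPublishingStrategy{
			Type:        "HostNetwork",
			HostNetwork: &HostNetworkStrategy{Protocol: "PROXY"},
		},
	}
	fmt.Println(spec.EndpointPublishingStrategy.HostNetwork.Protocol) // PROXY
}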
|
/close |
|
@sgreene570: Closed this PR. In response to this: /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |