Add option to enable proxy protocol #699
@mcfedr TCP Proxy LB is not really meant for HTTP traffic. As a result, it does not make sense for us to support it here. |
@rramkumar1 Surely you cannot be serious. Here's a real-world example that a lot of people out there use. Responses like these are the reason why AWS is eating your lunch. If a user comes to you with a feature request that is not only valid but highly sought after, the correct response is not CLOSE/WONTFIX. |
Using it for HTTP traffic and would like to enable it via annotation too. |
@rramkumar1 When you run Istio on GKE, the ingress-gateway service will configure a TCP LB. Now, if you want to use Istio functions like rate limiting on a per-IP basis, you need to know the source IP address. I hope that example might serve as a reasonable use case. |
+1 on this. We're currently planning to deploy a scenario like the one @dcherniv mentioned, with traefik instead of nginx. Since there is no way to deploy traefik outside of the Kubernetes cluster using e.g. keepalived (GCP Doc), and the GCP-integrated LB is the only way to go for a solution like this, it's basically a deal breaker. Btw, using the GCP-integrated HTTP(S) LB is no real-world option, as it lacks tons of features other ingress controllers offer. |
Why was this closed ? |
If you're looking for preservation of source IP in a GCE Network Load Balancer (L4) -> Ingress Nginx Controller (L7) -> Pod setup, you can achieve that without proxy protocol: kubernetes/ingress-nginx#3431 (comment). GCE NLB (L4) is not a proxy, so it does not need proxy protocol. Packets are forwarded straight to the VMs without SNAT. Further, your Ingress Nginx service should be set to externalTrafficPolicy: Local. In summary: to preserve source IP from the TCP LB (NLB) to Nginx Ingress, use externalTrafficPolicy: Local; to preserve source IP from Ingress Nginx to your pods (L7), use the headers Nginx forwards (e.g. X-Forwarded-For). TCP Proxy LB (not an NLB) seems like it has other use cases; NLB is sufficient for the case described above. I hope this helps. |
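For reference, a minimal sketch of this suggestion, assuming an ingress-nginx deployment on GKE (the name, namespace, selector, and ports below are illustrative placeholders):

```yaml
# Sketch: ingress-nginx Service fronted by a GCP Network Load Balancer.
# Only `type: LoadBalancer` and `externalTrafficPolicy: Local` matter for
# the source-IP preservation described above; everything else is a placeholder.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer            # GKE provisions an L4 Network Load Balancer
  externalTrafficPolicy: Local  # preserve the client source IP (no extra SNAT hop)
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```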
@jbielick Does this really work? Wouldn't this actually mean the following: the client connects to the LB IP, the node replies directly from its own IP, and the client drops those packets — result: no connection at all? As you said, an NLB is an L4 device, which means it's part of the transport layer, which in turn means it terminates the TCP connection with its own IP and connects to the actual service via a new connection between the NLB and the service, of course with the same payload. I doubt that the externalTrafficPolicy will solve this, but I will look further into it within the next weeks. Please correct me if I'm totally mistaken, of course. Greetings, |
Yes, it works. I think termination of the connection would be Layer 5. The major thing to note is that the NLB does not terminate the TCP connection. And you are correct that the packets are returned directly from the VM but maintain the correct source IP on return (See "direct server return").
|
I am using this too. I think you will have to run the ingress as a DaemonSet in this case, though, so that it is present on all of the nodes. |
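If you go the DaemonSet route mentioned here, a minimal sketch could look like the following; the image tag, labels, and ports are placeholders, and a real ingress-nginx install also needs a ServiceAccount, RBAC, and a ConfigMap:

```yaml
# Sketch: run the ingress controller on every node so the NLB always has a
# local endpoint. Image tag, labels, and ports are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # example tag
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
```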
We already do this one, but when we rollout-restart the Istio pods, it causes a short downtime. |
We're using the Kong Kubernetes Ingress Controller, and Kong seems to agree with the two solutions described by @jbielick above. However, as @zufardhiyaulhaq pointed out, externalTrafficPolicy: Local causes a short downtime when the controller pods are restarted. It seems that a load balancer with externalTrafficPolicy: Cluster only preserves the client source IP if proxy protocol is enabled. Ideally we should be able to configure a load balancer with proxy protocol enabled together with externalTrafficPolicy: Cluster. It looks like there is support for this feature — would it be possible to reopen the issue? |
Seconding support to have this reconsidered. I've been deploying the LoadBalancer L4 Service, waiting for the LB instance to come online, and manually enabling proxy protocol on those instances to get this to work (ew). Thanks. |
@mmiller1 You should take a look at the post by @jbielick above. Setting the externalTrafficPolicy to Local should do the trick. There also exists a whitepaper by Kemp Technologies on this somewhere, although I can't find it. Anyway: here is some background on DSR. |
@Eifoen thanks for the response. Using the Local traffic policy doesn't fit our use case. We can't tolerate any service unavailability when restarting the backing pods; kube-proxy handles this gracefully, but relying on the external load balancer's health checking to remove a node from service when the pod is shut down will result in 10-20 seconds of dropped traffic. |
@mmiller1 The proxy protocol does not solve this. Even though you enable proxy protocol, the node health checks of the GCP NLB still apply in the same way they do without it. I was going to suggest that you consider running another kind of ingress proxy within your deployment as a DaemonSet, but this would still not solve your problem, as the health checks of the GCP TCP LB still apply. You might want to dig deeper into the configuration of service health checks and configure the Backend-CRD (which allows you to specify the checkIntervalSec attribute) according to your requirements. |
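As an illustration of the Backend-CRD suggestion, here is a hedged sketch of a BackendConfig that shortens the health-check interval and the annotation that attaches it to a Service; the names are placeholders, and whether these settings apply to your particular load balancer type should be verified against the linked GKE docs:

```yaml
# Sketch: BackendConfig with a tighter health-check interval, referenced
# from the Service via the cloud.google.com/backend-config annotation.
# Names are placeholders; verify applicability against the GKE docs above.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: fast-healthcheck
spec:
  healthCheck:
    checkIntervalSec: 5      # the attribute mentioned above
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    cloud.google.com/backend-config: '{"default": "fast-healthcheck"}'
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - port: 80
      targetPort: 80
```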
We use manually configured load balancers that have proxy protocol enabled, talking to externalTrafficPolicy: Cluster services (ingress-nginx), which suits our use case of preserving the client IP address as well as avoiding downtime during rollouts of the controllers. It would be nice if this didn't require manual configuration on our part. |
The manually configured LBs will be subject to the same health-check intervals as the GKE-created ones. According to the description of your deployment, the failover between pods is handled by ingress-nginx; the external LBs are just handling the failover between the nginx pods (or the kube-proxy cluster IPs in your case). Thus your deployment is still subject to the timeout of the GCP LBs (default 15 sec., as far as I know) in case the Kubernetes node currently holding that IP fails - at least if you did not change the default check interval of your manually deployed LBs. The fact that you might be using kube-proxy as a middleman doesn't reduce your failover time in any way (at worst it might even lengthen it).
Anyway - none of the above is in any way tied to the fact that you are using the proxy protocol. It is, however, tied to the fact that you might have reduced the health-check interval in your manually configured LBs. This, as far as I'm concerned, can already be configured automatically by using the Backend-CRDs, in combination with using the automatically deployed service LBs with externalTrafficPolicy: Local, as I said above. |
We would also like the ability to enable proxy protocol with a Service annotation so that identifying the client source IP is possible for any TCP application. Unfortunately, without proxy protocol support, it is simply not possible to masquerade Pod IPs for outbound connections while preserving client source IP.
IP masquerade is rather necessary when Pods make connections that can only accept node IP addresses, which is defined as the purpose of the ip-masq-agent. If proxy protocol is not necessary in this configuration, then we would very much like to know the alternative configuration that will provide the client source IP. |
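For context, the masquerading behavior referred to here is usually driven by the ip-masq-agent ConfigMap; a minimal sketch, with example CIDRs that would need to match the cluster's actual Pod and node ranges:

```yaml
# Sketch: ip-masq-agent configuration. Traffic to destinations NOT listed in
# nonMasqueradeCIDRs is SNATed to the node IP as it leaves the node, which is
# the masquerading behavior discussed above. CIDRs are examples only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8
      - 172.16.0.0/12
      - 192.168.0.0/16
    masqLinkLocal: false
    resyncInterval: 60s
```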
A Pod that connects to a Service load balancer on the same cluster will see the Pod IP, because the traffic is short-circuited and never leaves the cluster. |
Yes, that's true but not the situation we have.
|
I miss some details here, but how is this related to the Service LoadBalancer type?
You have to use externalTrafficPolicy: Local; that was explicitly added to preserve the client IP. |
This setting on the service is not relevant in this case. We have this enabled but when IP masquerading is enabled, source IP is not available. See my previous comment (#699 (comment)) for the quote from the Google Cloud documentation. |
With kube-proxy in recent versions, the traffic is short-circuited, so it will not hit the IP-masquerade rules. |
We experience in a production cluster that, when IP masquerade is enabled and a Service is set with externalTrafficPolicy: Local, the client source IP is still not visible to the pods. |
Hmm 🤔 that should not happen... Is this a GKE cluster? Have you opened an issue with support about it? I can take a look if I get the issue number. |
Thanks for the offer @aojea , I'll send you a message directly. |
Is there a way to enable proxy protocol through Kubernetes Service annotations? I want to use proxy protocol for reading the Private Service Connect connection IDs for incoming internal traffic (and proxy protocol is the only way to get this information from incoming traffic). Both AWS and Azure have annotations to do this. |
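For comparison, this is roughly how the request is expressed on AWS today via a Service annotation (shown only to illustrate what is being asked for here; the annotation below is handled by the AWS cloud provider integration, not by GCP):

```yaml
# For comparison only: the AWS annotation that enables PROXY protocol on a
# Service of type LoadBalancer. A GCP equivalent is what this issue requests.
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```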
The Google Cloud load balancer can do it, so it seems it would just need an annotation to enable it.
https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#proxy-protocol
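Purely as an illustration of the feature being requested, a GCP equivalent might look something like the sketch below; the annotation name is hypothetical and does not exist in this project today:

```yaml
# HYPOTHETICAL sketch only: this annotation does not exist; it illustrates
# the shape of the feature requested in this issue.
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
  annotations:
    cloud.google.com/l4-proxy-protocol: "true"  # hypothetical annotation name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster  # proxy protocol would then carry the client IP
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```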