kube-router does not work with LoadBalancer #310
I found out that kube-router does not respect services of type LoadBalancer and does not create IPVS rules for them.
@tiukhtinvladimir Yes, it does not support the LoadBalancer service type yet. There is an issue open for it: #242. However, kube-router does support advertising cluster IPs and external IPs to BGP peers, so you can do ECMP load balancing. All the necessary functionality is there; it just needs to be made available through the LoadBalancer service type as well. @roffe did try to make kube-router work with MetalLB. I am afraid that, at this point, MetalLB and kube-router may not work together: the same node cannot have two BGP speakers (kube-router and MetalLB), and the BGP export policies from kube-router will also cause problems.
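For reference, the BGP advertisement mentioned above is enabled through flags on the kube-router daemon. A minimal sketch of the relevant container args in a kube-router DaemonSet — the peer IP and ASN values are placeholders, not part of this thread:

```yaml
# Fragment of a kube-router DaemonSet container spec (sketch).
# Peer router address and ASNs are placeholder values.
args:
  - --run-router=true               # enable the BGP speaker
  - --advertise-cluster-ip=true     # advertise service cluster IPs to BGP peers
  - --advertise-external-ip=true    # advertise service external IPs to BGP peers
  - --peer-router-ips=192.0.2.1
  - --peer-router-asns=64512
  - --cluster-asn=64513
```

With these flags, each node running kube-router announces the service IPs to the upstream router, which can then spread traffic across nodes via ECMP.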
@murali-reddy thanks for your prompt response. I am using MetalLB in ARP mode (not BGP), so on the router side I do 1:1 NAT to the IP advertised by MetalLB. The advantage is that MetalLB controls the VIP and moves it on node failure.
@tiukhtinvladimir as @murali-reddy mentioned, you can simply add an external IP to the service definition to advertise a VIP. If you just want a load-balanced service IP, MetalLB isn't strictly necessary. Also, ARP mode isn't very scalable, per MetalLB's own docs; the BGP-advertised external IP is quite scalable by comparison. I use this exact trick to have a highly available VIP for my ingress controllers.
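The trick described above amounts to setting `externalIPs` on an ordinary Service. A minimal sketch — the service name, selector, and VIP below are placeholders chosen for illustration:

```yaml
# Hypothetical Service advertising a VIP via kube-router's BGP speaker.
# Name, selector, and the 10.0.0.50 address are placeholder values.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: ClusterIP
  externalIPs:
    - 10.0.0.50          # the VIP announced to BGP peers
  selector:
    app: ingress-nginx
  ports:
    - port: 80
      targetPort: 80
```

Because every node advertises the same external IP, the upstream router sees multiple equal-cost paths to the VIP, and a failed node's route is simply withdrawn.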
@murali-reddy this should be closed as a dupe of #242 |
MetalLB is a load balancer for bare-metal installations. It works perfectly with kube-proxy but not with kube-router. Help! I like kube-router and do not want to go back to kube-proxy.