
Kubernetes multi-backend service loadbalancer #1343

Closed · wants to merge 11 commits

Conversation

@kokhang

kokhang commented Jul 11, 2016

This addresses issue #1506

Highly Available Multi-Backend Load Balancer

This project implements a load balancer controller that provides highly available, load-balanced access to HTTP and TCP Kubernetes applications. It supports different types of load balancer backends, ranging from software LBs such as nginx, to hardware LBs such as F5, to cloud LBs such as OpenStack LBaaS.

To create a loadbalancer, users just have to create a ConfigMap with two inputs:

  • Service name
  • Service namespace

A ConfigMap targeted at the loadbalancer controller looks like:

apiVersion: v1
kind: ConfigMap
metadata:
    name: configmap-my-service
    labels:
        app: loadbalancer
data:
    namespace: "default"
    target-service-name: "my-service"

Once the ConfigMap is created, a new backend is created and configured to load balance the provided service.
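For example, assuming the manifest above is saved as configmap-my-service.yaml (a filename chosen here just for illustration), the loadbalancer is created with a single command:

$ kubectl create -f configmap-my-service.yaml
configmap "configmap-my-service" created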

Our goal is to have this controller listen for Ingress events, rather than ConfigMaps, when generating config rules. Currently the controller watches ConfigMap resources to create and configure backends; eventually it will watch Ingress resources instead. That work is still being planned in Kubernetes, since the current version of Ingress does not support layer 4 routing.

This controller is designed to make it easy to integrate and create different load balancing backends, from software and hardware to cloud load balancers. Our initial featured backends are software load balancing (with keepalived and nginx), hardware load balancing with F5, and cloud load balancing with OpenStack LBaaS v2 (Octavia).

Software Loadbalancer via Daemon and VIPs

In the case of a software loadbalancer, this controller works with loadbalancer-daemon controllers, which are deployed across nodes and serve as highly available loadbalancers. The loadbalancer controller communicates with the daemons via a ConfigMap resource.
These daemon controllers use keepalived and nginx to provide highly available load balancing via VIPs. A VIP is allocated to every service that is being load balanced, which allows multiple services that bind to the same ports to work.
For F5 and OpenStack LBaaS, the loadbalancer controller talks to the appropriate servers via their APIs, so loadbalancer-daemon controllers are not needed.
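As a rough sketch of that controller-to-daemon handoff in the software case, the daemon-facing ConfigMap could carry per-service VIP and port entries like the following. The key names and values here are hypothetical illustrations, not the actual schema this controller writes:

apiVersion: v1
kind: ConfigMap
metadata:
    name: loadbalancer-daemon-config    # hypothetical name
    labels:
        app: loadbalancer-daemon
data:
    # one entry per load-balanced service (illustrative keys and values)
    coffee-svc.bind-ip: "10.0.0.10"     # VIP that keepalived announces on the node
    coffee-svc.port: "80"               # port nginx binds on that VIP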

Differences between this project and service-loadbalancer or nginx

Service-loadbalancer is a great option, but it is tailored only to software load balancing with HAProxy, and it is not designed in a way that can be easily decoupled.

The nginx ingress controller is nginx-only and works only for layer 7 applications. This project is intended to support many different backends and to work with all Kubernetes applications (layer 7 and layer 4).

Service-loadbalancer's support for L4 is very limited: the binding port needs to be open and specified as a hostPort when the controller is created. This forces users to specify and open the ports up front, and it also prevents two different services from load balancing on the same port (e.g., running two MySQL services). This project uses VIPs to remove that limitation.
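For example, with per-service VIPs, two MySQL services can both expose port 3306 side by side. The addresses below are illustrative, not output from this controller:

# Hypothetical VIP assignments made by the controller:
#   mysql-a -> bind-ip 10.0.0.11
#   mysql-b -> bind-ip 10.0.0.12
$ mysql -h 10.0.0.11 -P 3306   # reaches mysql-a
$ mysql -h 10.0.0.12 -P 3306   # reaches mysql-b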

Examples:

Software Loadbalancer using keepalived and nginx

  1. First, create the loadbalancer controller:
$ kubectl create -f examples/kube-loadbalancer-rc.yaml
  2. The loadbalancer daemon pod will only start on nodes labeled type: loadbalancer. Label the nodes you want the daemon to run on:
$ kubectl label node my-node1 type=loadbalancer
  3. Create the sample app, which consists of a service and a replication controller resource:
$ kubectl create -f examples/coffee-app.yaml
  4. Create a ConfigMap for the sample app service. This will be used to configure the loadbalancer backend:
$ kubectl create -f coffee-configmap.yaml
  5. Get the bind IP generated by the loadbalancer controller from the ConfigMap (a jsonpath shortcut appears after this walkthrough):
$ kubectl get configmap configmap-coffee-svc -o yaml
apiVersion: v1
data:
  bind-ip: "10.0.0.10"
  namespace: default
  target-service-name: coffee-svc
kind: ConfigMap
metadata:
  creationTimestamp: 2016-06-17T22:30:03Z
  labels:
    app: loadbalancer
  name: configmap-coffee-svc
  namespace: default
  resourceVersion: "157728"
  selfLink: /api/v1/namespaces/default/configmaps/configmap-coffee-svc
  uid: 08e12303-34db-11e6-87da-fa163eefe713
  6. To access your app:
 $ curl http://10.0.0.10
  <!DOCTYPE html>
  <html>
  <head>
  <title>Hello from NGINX!</title>
  <style>
      body {
          width: 35em;
          margin: 0 auto;
          font-family: Tahoma, Verdana, Arial, sans-serif;
      }
  </style>
  </head>
  <body>
  <h1>Hello!</h1>
  <h2>URI = /coffee</h2>
  <h2>My hostname is coffee-rc-mu9ns</h2>
  <h2>My address is 10.244.0.3:80</h2>
  </body>
  </html>
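As a convenience for step 5, if your kubectl supports jsonpath output you can pull just the bind IP instead of reading the full YAML; this one-liner is a suggestion, not part of the PR's examples:

$ kubectl get configmap configmap-coffee-svc -o jsonpath='{.data.bind-ip}'
10.0.0.10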


@googlebot

We found a Contributor License Agreement for you (the sender of this pull request) and all commit authors, but as best as we can tell these commits were authored by someone else. If that's the case, please add them to this pull request and have them confirm that they're okay with these commits being contributed to Google. If we're mistaken and you did author these commits, just reply here to confirm.

@thockin
Contributor

thockin commented Jul 11, 2016

First, let me say thanks for participating. Community contribution is what makes this project fly.

That said, this is essentially unreviewable. It is far too large with far too much scope. GitHub can't even load many of the commits because they are too big. Specifically, the very first commit - I was hoping it would be a design doc explaining what this is about, what problems it is solving, what other projects it obsoletes, etc. It may well be that I am just overwhelmed at the idea of 63 commits and 1300+ files, but I can't even start to think about this.

Can you please ease us into it? Where's the problem statement and design discussion? Can you squash commits like "Updating README" into commits where they are relevant, or at least lump them together? A smaller number of fully-formed commits is better than a large number of fragments, but a single mega-PR is no good either. Maybe put godeps changes into wholly distinct commits, so we can avoid GitHub's UI for it.

Sorry to criticize superficially - I just don't know how to even start thinking about this one :)

@kokhang
Author

kokhang commented Jul 11, 2016

Hi Tim,

Thank you for your feedback. This is my first PR to the Kubernetes project. I understand where you are coming from. I will squash some of the commits, separate out the vendor files into a different PR, and put together a design doc for this project.

@thockin
Contributor

thockin commented Jul 11, 2016

You don't need to split into different PRs, just use different commits in this PR.


…nd Makefiles

- Adding vendor files
- Adding support for building container and linux target from non-Linux systems
  by leveraging docker
- Added makefile for loadbalancer/ingress-loadbalancer
- Added Dockerfile
- Added gitignore to loadbalancer subprojects
@kokhang
Author

kokhang commented Jul 12, 2016

Hi Tim,
I have cleaned up the commits in this PR a bit. I have separated out the vendor files into a different commit (the first one). Please let me know if there is anything else I can do to make it easier to get it reviewed.

Thanks,

@kokhang
Author

kokhang commented Jul 12, 2016

Tim, how do I add @abithap to this PR so that googlebot accepts the CLA?
Thanks,

@eparis
Contributor

eparis commented Jul 12, 2016

Only a human can work out the case in question and even then only when they decide to either click or not click the big green merge button. If/when Tim does review he'll deal with it. Nothing you can do about the CLA bot here.

@kokhang
Author

kokhang commented Jul 14, 2016

@thockin Do you think this PR is in a better state where it could be reviewed?

Thanks

kokhang and others added 9 commits August 9, 2016 11:57
… backends

This controller is designed to easily integrate and create different load balancing backends.
From software, hardware to cloud loadbalancers.

For software loadbalancer, keepalived is used to manage VIPs so that software loadbalancers
can bind to them.

The software loadbalancers run in nodes with keepalived. They communicate with the loadbalancer controller
via configmaps.

- Added backend for provisioning Openstack Lbaas
- Nginx added as loadbalancer-daemon backend
- Added a keepalived controller to manage VIPs for nginx backend
- Updated Dockerfile to include keepalived and its dependency packages
- Added example files
- Added README
- Create a daemon configmap that will be consumed by the loadbalancer-daemons to configure software loadbalancers
- Refactor the controller and moved logics to the backend controller implementation
- Make software loadbalancer (loadbalancer-daemon) the default backend
- Updated README file to include daemon backend for software loadbalancer
- Updated example files
- This implements a VIP controller which assigns a VIP from a pool to a service.
- The VIP is used by loadbalancer-daemon to give keepalived and nginx the IP for binding the service
- Add bind-ip to the user's configmap
-- This is used to let the user know what the bind IP is
-- Updated examples to show bind-ip or vip in user configmap
- Added go routines to monitor the nginx and keepalived process.
- If either of them is killed, the main process will exit with a non-zero error code
- Kubernetes will then spawn a new loadbalancer controller process
- Create pool, members, monitor and virtual server for every kube server
- Added example for f5 controller
- Rename ingress-loadbalancer to kube-loadbalancer
- Update readme to include F5 backend
- LBaaS resources will not be deleted if the states of the kube nodes, services, or configmap config are the same
- If an error occurs during loadbalancer creation, the error message and status will be added to the user's configmap
- Add authentication error logging for f5
- Make virtual router ID for keepalived configurable
- Added lock when updating configmap to prevent race conditions
- Made keepalived options to be configurable
@kokhang
Author

kokhang commented Aug 9, 2016

@thockin @eparis: I would really like to get some traction on this PR. I have updated the first comment to provide more information about this project. Please let me know if anything else is unclear.

I have also organized the commits so that they are grouped together.
The first commit contains all godep vendor files and other build scripts (Makefile, Dockerfile, etc.). That way you can use whatever tooling you like and avoid reading 1300+ vendor files.

Last, Travis is happy with this. My last commit took care of lint and boilerplate header issues.

Please let me know if there is anything else I can do to make it easier for you to review.
Thanks,

@bgrant0607
Contributor

cc @bprashanth

@kokhang
Author

kokhang commented Aug 16, 2016

Since /contrib is going away (#762), I am closing this PR. This project will live in https://github.com/hpcloud/kubernetes-service-loadbalancer until it can be part of kubernetes-incubator.

@kokhang kokhang closed this Aug 16, 2016
@warmchang
Contributor

mark
