The Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration and management.
This project is a Cluster API Infrastructure Provider implementation that uses Kubernetes itself to provide the infrastructure: Pods running the kindest/node image built for kind are created and configured to serve as Nodes, which together form a cluster.
The primary use cases for this project are testing and experimentation.
We will deploy a Kubernetes cluster to provide the infrastructure, install the Cluster API controllers, and then configure an example Kubernetes cluster using the Cluster API and the Kubernetes infrastructure provider. We will refer to the infrastructure cluster as the outer cluster and to the Cluster API cluster as the inner cluster.
Any recent Kubernetes cluster (1.16+) should be suitable for the outer cluster.
We are going to use Calico as the overlay network implementation for the inner cluster, with IP-in-IP encapsulation enabled so that our outer cluster does not need to know about the inner cluster's Pod IP range. To make this work we need to ensure that the ipip kernel module is loadable and that IPv4 encapsulated packets are forwarded by the kernel.
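Both prerequisites can be checked by hand if you have shell access to an outer cluster Node; the snippet below is illustrative rather than required:

# Confirm the ipip kernel module is loadable
sudo modprobe ipip && lsmod | grep ipip
# Confirm the kernel will forward IPv4 traffic (expect: net.ipv4.ip_forward = 1)
sysctl net.ipv4.ip_forward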
On GKE this can be accomplished as follows:
# The GKE Ubuntu image includes the ipip kernel module
# Calico handles loading the module if necessary
# https://github.com/projectcalico/felix/blob/9469e77e0fa530523be915dfaa69cc42d30b8317/dataplane/linux/ipip_mgr.go#L107-L110
MANAGEMENT_CLUSTER_NAME="management"
gcloud container clusters create $MANAGEMENT_CLUSTER_NAME \
  --image-type=UBUNTU \
  --machine-type=n1-standard-2
# Allow IP-in-IP traffic between outer cluster Nodes when it originates from inner cluster Pods
CLUSTER_CIDR=$(gcloud container clusters describe $MANAGEMENT_CLUSTER_NAME --format="value(clusterIpv4Cidr)")
gcloud compute firewall-rules create allow-$MANAGEMENT_CLUSTER_NAME-cluster-pods-ipip \
  --source-ranges=$CLUSTER_CIDR \
  --allow=ipip
# Forward IPv4 encapsulated packets
kubectl apply -f hack/forward-ipencap.yaml
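The hack/forward-ipencap.yaml manifest referenced above lives in this repository. Its contents are not reproduced here, but its job is to make every outer cluster Node accept forwarded protocol 4 (ipencap) traffic; conceptually it amounts to something like the following per-Node host configuration, sketched as shell for illustration:

# Hypothetical per-Node equivalent of the forward-ipencap manifest
iptables -C FORWARD -p ipencap -j ACCEPT 2>/dev/null \
  || iptables -A FORWARD -p ipencap -j ACCEPT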
# Install clusterctl
# https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
CLUSTER_API_VERSION=v0.3.15
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/$CLUSTER_API_VERSION/clusterctl-$(uname -s | tr '[:upper:]' '[:lower:]')-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
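A quick sanity check that clusterctl is installed correctly:

clusterctl version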
# Configure the Kubernetes infrastructure provider
mkdir -p $HOME/.cluster-api
cat > $HOME/.cluster-api/clusterctl.yaml <<EOF
providers:
- name: kubernetes
  url: https://github.com/dippynark/cluster-api-provider-kubernetes/releases/latest/infrastructure-components.yaml
  type: InfrastructureProvider
EOF
# Initialise
clusterctl init --infrastructure kubernetes
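Initialisation installs the core Cluster API controllers as well as the bootstrap, control plane and Kubernetes infrastructure providers into the outer cluster. Before moving on you can confirm they are running; capi-system is the core controller namespace, and the clusterctl init output lists the others:

# Verify the Cluster API core controllers are running
kubectl get pods -n capi-system
# Each provider runs in its own namespace
kubectl get pods --all-namespaces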
CLUSTER_NAME="example"
# The control plane is exposed using a Service of this type; use ClusterIP
# for outer clusters that do not support Services of type LoadBalancer
export KUBERNETES_CONTROL_PLANE_SERVICE_TYPE="LoadBalancer"
export KUBERNETES_CONTROLLER_MACHINE_CPU_REQUEST="500m"
export KUBERNETES_CONTROLLER_MACHINE_MEMORY_REQUEST="1Gi"
export KUBERNETES_WORKER_MACHINE_CPU_REQUEST="200m"
export KUBERNETES_WORKER_MACHINE_MEMORY_REQUEST="512Mi"
# See kind releases for other available image versions of kindest/node
# https://github.com/kubernetes-sigs/kind/releases
clusterctl config cluster $CLUSTER_NAME \
  --infrastructure kubernetes \
  --kubernetes-version 1.17.0 \
  --control-plane-machine-count 1 \
  --worker-machine-count 1 \
  | kubectl apply -f -
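Provisioning takes a few minutes. Because the inner cluster's Machines are implemented as Pods in the outer cluster, progress can be watched from both angles. The label selector below assumes the provider propagates the standard cluster.x-k8s.io/cluster-name label to the Pods it creates; fall back to a plain Pod listing if it does not:

# Cluster API view: the Cluster and its Machines
kubectl get clusters,machines
# Infrastructure view: the Machines are kindest/node Pods in the outer cluster
kubectl get pods -l cluster.x-k8s.io/cluster-name=$CLUSTER_NAME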
# Retrieve kubeconfig
until [ -n "$(kubectl get secret $CLUSTER_NAME-kubeconfig -o jsonpath='{.data.value}' 2>/dev/null)" ]; do
  sleep 1
done
kubectl get secret $CLUSTER_NAME-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > $CLUSTER_NAME-kubeconfig
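Newer clusterctl releases can also fetch the kubeconfig directly, equivalent to the loop above:

# Alternative, if your clusterctl supports the `get kubeconfig` subcommand
clusterctl get kubeconfig $CLUSTER_NAME > $CLUSTER_NAME-kubeconfig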
# Switch to the new Kubernetes cluster. If the cluster's API server endpoint is
# not reachable from your local machine you can instead exec into a controller
# Node (Pod) and run `export KUBECONFIG=/etc/kubernetes/admin.conf`
export KUBECONFIG=$CLUSTER_NAME-kubeconfig
# Wait for the API Server to come up
until kubectl get nodes &>/dev/null; do
  sleep 1
done
# Install Calico. This could also be done using a ClusterResourceSet
# https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
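Alternatively, a ClusterResourceSet automates this step. The sketch below is illustrative: it assumes the experimental feature was enabled when initialising the outer cluster (EXP_CLUSTER_RESOURCE_SET=true at clusterctl init time) and must be run against the outer cluster, i.e. without the inner cluster's KUBECONFIG exported:

# Run against the outer cluster: package the CNI manifest in a ConfigMap
curl -sL https://docs.projectcalico.org/manifests/calico.yaml -o calico.yaml
kubectl create configmap calico-manifests --from-file=calico.yaml
# Apply the packaged manifest to every Cluster labelled cni=calico
cat <<EOF | kubectl apply -f -
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico
spec:
  clusterSelector:
    matchLabels:
      cni: calico
  resources:
  - name: calico-manifests
    kind: ConfigMap
EOF
kubectl label cluster $CLUSTER_NAME cni=calico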
# Interact with your new cluster!
kubectl get nodes
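The inner cluster behaves like any other Kubernetes cluster, so a simple smoke test is to schedule a workload on it:

# Run a test workload on the inner cluster
kubectl create deployment nginx --image=nginx
kubectl rollout status deployment nginx

When you are finished, tear everything down: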
unset KUBECONFIG
rm -f $CLUSTER_NAME-kubeconfig
kubectl delete cluster $CLUSTER_NAME
# If using the GKE example above
yes | gcloud compute firewall-rules delete allow-$MANAGEMENT_CLUSTER_NAME-cluster-pods-ipip
yes | gcloud container clusters delete $MANAGEMENT_CLUSTER_NAME --async
Remaining work:

- Implement a finalizer for control plane Pods to prevent deletions that would lose etcd quorum (i.e. acting like a PodDisruptionBudget)
- Work out why a KubeadmControlPlane with 3 replicas has 0 failure tolerance
- Improve performance of the control plane
- Improve recovery of a persistent control plane with 3 nodes
- Use Services to keep etcd hostnames consistent? This would help if all control plane Nodes are deleted at once
- Default the cluster control plane Service type to ClusterIP