This is my implementation of Kubernetes The Hard Way in Terraform on Microsoft Azure instead of Google Cloud Platform.
I'm doing this for:
- The learning experience of setting up Kubernetes the hard way in a cloud other than GCP (Azure in my case)
- To see how deep I can dig into Terraform in the process
- To share what I learn with the world
This is a work in progress that I update whenever I find the time.
The Terraform code will most likely go through multiple refactors, whenever I think of a better way to carry out a certain task.
You need to have kubectl installed for interacting with the cluster and for Terraform to be able to generate the configuration files.
Download the current release from https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl
You also need Terraform installed to run the code in this repository.
Download the current version from https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
Next, for Terraform to be able to authenticate with Azure, you need to either set up a service principal in your Azure account (https://www.terraform.io/docs/providers/azurerm/authenticating_via_service_principal.html) or have the Azure CLI installed (https://www.terraform.io/docs/providers/azurerm/authenticating_via_azure_cli.html).
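If you go the service principal route, the provider configuration could look roughly like the sketch below; the variable names are placeholders of mine, not necessarily what this repo uses:

```hcl
# Sketch: azurerm provider authenticated via a service principal.
# All variable names are illustrative placeholders.
provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"     # the service principal's appId
  client_secret   = "${var.client_secret}" # the service principal's password
  tenant_id       = "${var.tenant_id}"
}
```

With the Azure CLI variant, a plain `provider "azurerm" {}` block suffices after an `az login`.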
The private key to `node_ssh_key` for accessing the VMs needs to be loaded into your SSH agent (for copying the certificates over to the instances).
The underlying Azure infrastructure is provisioned with the following Terraform modules (a wiring sketch follows the list):
- modules/vnet
- This module creates the overall resource group with supporting infrastructure, like:
- a VNET
- a Subnet
- some default security groups for SSH & HTTPS access
- modules/lb
- This module creates a load balancer which is put in front of either the:
- master nodes for apiserver access
- worker nodes for later workload access
- modules/lb_rule
- This module creates a rule for an existing load balancer (for now, SSH access used to provision the certs & Consul to our instances)
- modules/vms
- This module creates the actual VMs and takes some parameters for naming, instance size, networking & load-balancer allocation
- This currently deploys Ubuntu 16.04 VMs with a Consul agent running
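As a rough idea of how these modules might be wired together at the top level, consider the following sketch; every input and output name here is an illustrative placeholder rather than this repo's actual interface:

```hcl
# Illustrative wiring only; module inputs/outputs are placeholders.
module "vnet" {
  source   = "./modules/vnet"
  name     = "k8s-hard-way"
  location = "westeurope"
}

module "master_lb" {
  source              = "./modules/lb"
  resource_group_name = "${module.vnet.resource_group_name}"
}

module "masters" {
  source          = "./modules/vms"
  name_prefix     = "master"
  vm_size         = "Standard_DS1_v2"
  subnet_id       = "${module.vnet.subnet_id}"
  lb_backend_pool = "${module.master_lb.backend_pool_id}"
}
```

The worker side would mirror this with a second lb/vms pair.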
The cluster PKI is provisioned with the following Terraform module (a sketch of the certificate resources follows the list):
- This module creates the PKI infrastructure for the cluster:
- the Certificate Authority
- an Admin Client certificate
- a Kubelet Client certificate per node
- the kube-proxy Client certificate
- the apiserver Server certificate
- also, it distributes the keys and certificates to the appropriate instances
- the CA key & cert to the apiserver instances
- the apiserver key & cert to the apiserver instances
- the kubelet keys & certs per node to the worker instances
- the CA cert to each worker instance
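In Terraform, a CA and a locally signed client certificate can be modeled with the tls provider, roughly along these lines (a sketch; not necessarily the exact resources or subjects this repo uses):

```hcl
# Sketch: a self-signed CA plus an admin client cert signed by it.
resource "tls_private_key" "ca" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "ca" {
  key_algorithm         = "${tls_private_key.ca.algorithm}"
  private_key_pem       = "${tls_private_key.ca.private_key_pem}"
  is_ca_certificate     = true
  validity_period_hours = 8760
  allowed_uses          = ["cert_signing", "key_encipherment", "server_auth", "client_auth"]

  subject {
    common_name  = "Kubernetes"
    organization = "Kubernetes"
  }
}

resource "tls_private_key" "admin" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_cert_request" "admin" {
  key_algorithm   = "${tls_private_key.admin.algorithm}"
  private_key_pem = "${tls_private_key.admin.private_key_pem}"

  subject {
    common_name  = "admin"
    organization = "system:masters"
  }
}

resource "tls_locally_signed_cert" "admin" {
  cert_request_pem      = "${tls_cert_request.admin.cert_request_pem}"
  ca_key_algorithm      = "${tls_private_key.ca.algorithm}"
  ca_private_key_pem    = "${tls_private_key.ca.private_key_pem}"
  ca_cert_pem           = "${tls_self_signed_cert.ca.cert_pem}"
  validity_period_hours = 8760
  allowed_uses          = ["key_encipherment", "digital_signature", "client_auth"]
}
```

The resulting PEMs can then be pushed to the instances with `file` provisioners over SSH, which is why the `node_ssh_key` private key has to be in your agent.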
The kubeconfig files are generated with the following Terraform module (a sketch follows the list):
- This module creates the client authentication configs for the kubelet and kube-proxy
- it does so by calling the appropriate `kubectl config` commands and copying the kubeconfig files to each node
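Such a module would typically shell out via a `local-exec` provisioner; the following is a hedged sketch for a single kubelet kubeconfig, where `var.node_name`, `var.apiserver_address`, and the file paths are placeholders:

```hcl
# Sketch: generate a kubelet kubeconfig by calling kubectl locally.
resource "null_resource" "kubelet_kubeconfig" {
  provisioner "local-exec" {
    command = <<EOF
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem --embed-certs=true \
  --server=https://${var.apiserver_address}:6443 \
  --kubeconfig=${var.node_name}.kubeconfig
kubectl config set-credentials system:node:${var.node_name} \
  --client-certificate=${var.node_name}.pem \
  --client-key=${var.node_name}-key.pem --embed-certs=true \
  --kubeconfig=${var.node_name}.kubeconfig
kubectl config set-context default --cluster=kubernetes-the-hard-way \
  --user=system:node:${var.node_name} \
  --kubeconfig=${var.node_name}.kubeconfig
kubectl config use-context default --kubeconfig=${var.node_name}.kubeconfig
EOF
  }
}
```

The generated file is then copied over to the node with a `file` provisioner.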
The data encryption config is generated with the following Terraform module (a sketch follows):
- This module creates the encryption config YAML file and copies it to the master nodes
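A hedged sketch of how this could look, generating a random AES key and rendering the YAML through the template provider; all names here are placeholders:

```hcl
# Sketch: render the encryption config with a random 32-byte key.
resource "random_id" "encryption_key" {
  byte_length = 32
}

data "template_file" "encryption_config" {
  template = <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $${encryption_key}
      - identity: {}
EOF

  vars {
    encryption_key = "${random_id.encryption_key.b64_std}"
  }
}

# Copy the rendered file to every master node.
resource "null_resource" "encryption_config" {
  count = "${length(var.master_ips)}" # var.master_ips is a placeholder

  connection {
    type = "ssh"
    host = "${element(var.master_ips, count.index)}"
    user = "ubuntu"
  }

  provisioner "file" {
    content     = "${data.template_file.encryption_config.rendered}"
    destination = "/home/ubuntu/encryption-config.yaml"
  }
}
```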
The etcd cluster is bootstrapped with the following Terraform module:
- This module bootstraps the etcd cluster on the controller nodes
We construct a fake dependency on the apiserver & CA certs here by using their internal resource IDs (available only after creation) as input for the etcd module. This ensures the etcd module is only applied once the certificate files are available (as we need to copy them into the etcd config dir); the sketch below shows the idea.
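Sketched out, the trick looks something like this (variable and output names are placeholders); since Terraform 0.11 modules don't support `depends_on`, feeding the IDs through a variable is the usual workaround:

```hcl
# Sketch: pseudo-dependency on the cert resources via their IDs.
module "etcd" {
  source = "./modules/etcd"

  # Only known after the certs exist, so etcd waits for them.
  cert_ids = "${module.certificates.ca_cert_id},${module.certificates.apiserver_cert_id}"
}
```

Inside the module, the variable just needs to be referenced somewhere, e.g. as a trigger on the provisioning resource:

```hcl
resource "null_resource" "etcd_bootstrap" {
  triggers {
    cert_ids = "${var.cert_ids}"
  }

  # ... file & remote-exec provisioners that set up etcd ...
}
```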