Update EKS example to use two applies
In order to avoid provider configuration issues associated with a single-apply create, use two applies, and use the AWS provider rather than the EKS cluster module.
dak1n1 committed May 7, 2021
1 parent 0dfb1f0 commit f74e08f
Showing 15 changed files with 227 additions and 255 deletions.
55 changes: 27 additions & 28 deletions _examples/eks/README.md
@@ -1,6 +1,6 @@
# EKS (Amazon Elastic Kubernetes Service)

This example shows how to use the Terraform Kubernetes Provider and Terraform Helm Provider to configure an EKS cluster. The example config builds the EKS cluster and applies the Kubernetes configurations in a single operation. This guide will also show you how to make changes to the underlying EKS cluster in such a way that Kubernetes/Helm resources are recreated after the underlying cluster is replaced.
This example demonstrates the most reliable way to use the Kubernetes provider together with the AWS provider to create an EKS cluster. By keeping the two providers' resources in separate Terraform states (or separate workspaces using [Terraform Cloud](https://app.terraform.io/)), we can limit the scope of each apply and make the right changes in the right place, for example updating the underlying EKS infrastructure without running into the Kubernetes provider configuration issues caused by modifying EKS cluster attributes in a single apply.
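
In practice, the second apply reads the cluster's connection details through data sources and feeds them into the Kubernetes provider. The sketch below mirrors the configuration in `kubernetes-config/main.tf` from this commit (the Helm provider is configured the same way):

```
data "aws_eks_cluster" "default" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "default" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}
```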

You will need the following environment variables to be set:

@@ -9,33 +9,27 @@ You will need the following environment variables to be set:

See [AWS Provider docs](https://www.terraform.io/docs/providers/aws/index.html#configuration-reference) for more details about these variables and alternatives, like `AWS_PROFILE`.
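
For reference, a minimal AWS provider block that relies on those environment variables might look like this (a sketch; the provider configuration actually used in this example may differ):

```
# Credentials and region are read from the environment (AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) or from AWS_PROFILE, so the block
# itself can stay empty.
provider "aws" {
  # region = "us-west-2"  # alternative to setting AWS_DEFAULT_REGION
}
```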

Ensure that the `KUBE_CONFIG_FILE` and `KUBE_CONFIG_FILES` environment variables are NOT set, as they will interfere with the cluster build.

```
unset KUBE_CONFIG_FILE
unset KUBE_CONFIG_FILES
```
## Create EKS cluster

To install the EKS cluster using default values, run terraform init and apply from the directory containing this README.
Choose a name for the cluster, or use the terraform config in the current directory to create a random name.

```
terraform init
terraform apply
terraform apply --auto-approve
export CLUSTERNAME=$(terraform output -raw cluster_name)
```
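
The random name itself comes from the config at the top level; the sketch below is based on the `random_id` resource that this commit moves out of `vpc/main.tf` (the top-level files themselves are not shown in this diff):

```
resource "random_id" "cluster_name" {
  byte_length = 2
  prefix      = "k8s-acc-"
}

output "cluster_name" {
  value = random_id.cluster_name.hex
}
```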

## Kubeconfig for manual CLI access

This example generates a kubeconfig file in the current working directory. However, the token in this config expires in 15 minutes. The token can be refreshed by running `terraform apply` again. Export the KUBECONFIG to manually access the cluster:
Change into the eks-cluster directory and create the EKS cluster infrastructure.

```
terraform apply
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl get pods -n test
cd eks-cluster
terraform init
terraform apply -var=cluster_name=$CLUSTERNAME
cd -
```

## Optional variables

The Kubernetes version can be specified at apply time:
Optionally, the Kubernetes version can be specified at apply time:

```
terraform apply -var=kubernetes_version=1.18
@@ -44,25 +38,30 @@ terraform apply -var=kubernetes_version=1.18
See https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html for currently available versions.
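
For illustration, that flag maps onto a variable declaration like the one sketched below (hypothetical; it is not among the files shown in this commit), whose value would then feed the `version` argument of the `aws_eks_cluster` resource:

```
variable "kubernetes_version" {
  type    = string
  default = "1.18"
}

# Hypothetical wiring inside the aws_eks_cluster resource:
#   version = var.kubernetes_version
```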


### Worker node count and instance type
## Create Kubernetes resources

The number of worker nodes, and the instance type, can be specified at apply time:
Change into the kubernetes-config directory to apply Kubernetes resources to the new cluster.

```
terraform apply -var=workers_count=4 -var=workers_type=m4.xlarge
cd kubernetes-config
terraform init
terraform apply -var=cluster_name=$CLUSTERNAME
```

## Additional configuration of EKS

To view all available configuration options for the EKS module used in this example, see [terraform-aws-modules/eks docs](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest).
## Deleting the cluster

## Replacing the EKS cluster and re-creating the Kubernetes / Helm resources
First, delete the Kubernetes resources as shown below. This gives Ingress- and Service-related load balancers a chance to be deleted before the other AWS resources are removed.

When the cluster is initially created, the Kubernetes and Helm providers will not be initialized until authentication details are created for the cluster. However, for future operations that may involve replacing the underlying cluster (for example, changing the network where the EKS cluster resides), the EKS cluster will have to be targeted without the Kubernetes/Helm providers, as shown below. This is done by removing `module.kubernetes-config` from the Terraform state prior to replacing cluster credentials, to avoid passing outdated credentials into the providers.
```
cd kubernetes-config
terraform destroy -var=cluster_name=$CLUSTERNAME
cd -
```

This will create the new cluster and the Kubernetes resources in a single apply.
Then delete the EKS-related resources:

```
terraform state rm module.kubernetes-config
terraform apply
cd eks-cluster
terraform destroy -var=cluster_name=$CLUSTERNAME
cd -
```
36 changes: 36 additions & 0 deletions _examples/eks/eks-cluster/cluster.tf
@@ -0,0 +1,36 @@
resource "aws_eks_cluster" "k8s-acc" {
name = var.cluster_name
role_arn = aws_iam_role.k8s-acc-cluster.arn

vpc_config {
subnet_ids = aws_subnet.k8s-acc.*.id
}

# Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
# Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
depends_on = [
aws_iam_role_policy_attachment.k8s-acc-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.k8s-acc-AmazonEKSVPCResourceController,
]
}

resource "aws_eks_node_group" "k8s-acc" {
cluster_name = aws_eks_cluster.k8s-acc.name
node_group_name = var.cluster_name
node_role_arn = aws_iam_role.k8s-acc-node.arn
subnet_ids = aws_subnet.k8s-acc.*.id

scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}

# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.k8s-acc-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.k8s-acc-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.k8s-acc-AmazonEC2ContainerRegistryReadOnly,
]
}
60 changes: 60 additions & 0 deletions _examples/eks/eks-cluster/iam.tf
@@ -0,0 +1,60 @@
resource "aws_iam_role" "k8s-acc-cluster" {
name = var.cluster_name

assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.k8s-acc-cluster.name
}

# Optionally, enable Security Groups for Pods
# Reference: https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html
resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSVPCResourceController" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
role = aws_iam_role.k8s-acc-cluster.name
}

resource "aws_iam_role" "k8s-acc-node" {
name = "${var.cluster_name}-node"

assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}]
Version = "2012-10-17"
})
}

resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.k8s-acc-node.name
}

resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.k8s-acc-node.name
}

resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.k8s-acc-node.name
}
3 changes: 3 additions & 0 deletions _examples/eks/eks-cluster/variables.tf
@@ -0,0 +1,3 @@
variable "cluster_name" {
type = string
}
10 changes: 10 additions & 0 deletions _examples/eks/eks-cluster/version.tf
@@ -0,0 +1,10 @@
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "3.38.0"
}
}
}


28 changes: 7 additions & 21 deletions _examples/eks/vpc/main.tf → _examples/eks/eks-cluster/vpc.tf
@@ -1,32 +1,16 @@
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "3.22.0"
}
}
}

# Using these data sources allows the configuration to be
# generic for any region.
data "aws_region" "current" {
}

data "aws_availability_zones" "available" {
}

resource "random_id" "cluster_name" {
byte_length = 2
prefix = "k8s-acc-"
}

resource "aws_vpc" "k8s-acc" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
"Name" = "terraform-eks-k8s-acc-node"
"kubernetes.io/cluster/${random_id.cluster_name.hex}" = "shared"
"Name" = "terraform-eks-k8s-acc-node"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
}
}

@@ -39,9 +23,9 @@ resource "aws_subnet" "k8s-acc" {
map_public_ip_on_launch = true

tags = {
"Name" = "terraform-eks-k8s-acc-node"
"kubernetes.io/cluster/${random_id.cluster_name.hex}" = "shared"
"kubernetes.io/role/elb" = 1
"Name" = "terraform-eks-k8s-acc-node"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
}

@@ -68,3 +52,5 @@ resource "aws_route_table_association" "k8s-acc" {
subnet_id = aws_subnet.k8s-acc[count.index].id
route_table_id = aws_route_table.k8s-acc.id
}


28 changes: 28 additions & 0 deletions _examples/eks/kubernetes-config/kubeconfig.tpl
@@ -0,0 +1,28 @@
apiVersion: v1
preferences: {}
kind: Config

clusters:
- cluster:
server: ${endpoint}
certificate-authority-data: ${clusterca}
name: ${cluster_name}

contexts:
- context:
cluster: ${cluster_name}
user: ${cluster_name}
name: ${cluster_name}

current-context: ${cluster_name}

users:
- name: ${cluster_name}
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
command: aws-iam-authenticator
args:
- token
- --cluster-id
- ${cluster_name}
91 changes: 35 additions & 56 deletions _examples/eks/kubernetes-config/main.tf
@@ -1,69 +1,48 @@
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.0.3"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.1.0"
}
}
data "aws_eks_cluster" "default" {
name = var.cluster_name
}

resource "kubernetes_namespace" "test" {
metadata {
name = "test"
data "aws_eks_cluster_auth" "default" {
name = var.cluster_name
}

provider "kubernetes" {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.default.token
}

provider "helm" {
kubernetes {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.default.token
}
}

resource "kubernetes_deployment" "test" {
resource "local_file" "kubeconfig" {
sensitive_content = templatefile("${path.module}/kubeconfig.tpl", {
cluster_name = var.cluster_name,
clusterca = data.aws_eks_cluster.default.certificate_authority[0].data,
endpoint = data.aws_eks_cluster.default.endpoint,
})
filename = "./kubeconfig-${var.cluster_name}"
}

resource "kubernetes_namespace" "test" {
metadata {
name = "test"
namespace= kubernetes_namespace.test.metadata.0.name
}
spec {
replicas = 2
selector {
match_labels = {
app = "test"
}
}
template {
metadata {
labels = {
app = "test"
}
}
spec {
container {
image = "nginx:1.19.4"
name = "nginx"

resources {
limits = {
memory = "512M"
cpu = "1"
}
requests = {
memory = "256M"
cpu = "50m"
}
}
}
}
}
}
}

resource helm_release nginx_ingress {
name = "nginx-ingress-controller"
resource "helm_release" "nginx_ingress" {
namespace = kubernetes_namespace.test.metadata.0.name
wait = true
timeout = 600

repository = "https://charts.bitnami.com/bitnami"
chart = "nginx-ingress-controller"
name = "ingress-nginx"

set {
name = "service.type"
value = "ClusterIP"
}
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
version = "v3.30.0"
}
3 changes: 3 additions & 0 deletions _examples/eks/kubernetes-config/output.tf
@@ -0,0 +1,3 @@
output "kubeconfig" {
value = abspath("${path.root}/${local_file.kubeconfig.filename}")
}