Update EKS example to use two applies #1260

Merged
merged 3 commits on May 13, 2021
57 changes: 28 additions & 29 deletions _examples/eks/README.md
@@ -1,6 +1,6 @@
# EKS (Amazon Elastic Kubernetes Service)

This example shows how to use the Terraform Kubernetes Provider and Terraform Helm Provider to configure an EKS cluster. The example config builds the EKS cluster and applies the Kubernetes configurations in a single operation. This guide will also show you how to make changes to the underlying EKS cluster in such a way that Kubernetes/Helm resources are recreated after the underlying cluster is replaced.
This example demonstrates the most reliable way to use the Kubernetes provider together with the AWS provider to create an EKS cluster. By keeping the two providers' resources in separate Terraform states (or separate workspaces using [Terraform Cloud](https://app.terraform.io/)), we can limit the scope of changes to either the EKS cluster or the Kubernetes resources. This prevents dependency issues between the AWS and Kubernetes providers, since Terraform's [provider configurations must be known before a configuration can be applied](https://www.terraform.io/docs/language/providers/configuration.html).

You will need the following environment variables to be set:

@@ -9,60 +9,59 @@ You will need the following environment variables to be set:

See [AWS Provider docs](https://www.terraform.io/docs/providers/aws/index.html#configuration-reference) for more details about these variables and alternatives, like `AWS_PROFILE`.

Ensure that `KUBE_CONFIG_FILE` and `KUBE_CONFIG_FILES` environment variables are NOT set, as they will interfere with the cluster build.

```
unset KUBE_CONFIG_FILE
unset KUBE_CONFIG_FILES
```
## Create EKS cluster

To install the EKS cluster using default values, run terraform init and apply from the directory containing this README.
Choose a name for the cluster, or use the Terraform configuration in the current directory to generate a random name.

```
terraform init
terraform apply
terraform apply --auto-approve
export CLUSTERNAME=$(terraform output -raw cluster_name)
```
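
The random cluster name comes from a small configuration in this directory. A minimal sketch of what it might look like, assuming a `random_id` resource and a `cluster_name` output (only the output name is relied on by the command above):

```
# Hypothetical sketch: generate a random cluster name and expose it as an output.
resource "random_id" "cluster_name" {
  byte_length = 2
  prefix      = "k8s-acc-"
}

output "cluster_name" {
  value = random_id.cluster_name.hex
}
```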

## Kubeconfig for manual CLI access

This example generates a kubeconfig file in the current working directory. However, the token in this config expires in 15 minutes. The token can be refreshed by running `terraform apply` again. Export the KUBECONFIG to manually access the cluster:
Change into the `eks-cluster` directory and create the EKS cluster infrastructure.

```
terraform apply
export KUBECONFIG=$(terraform output -raw kubeconfig_path)
kubectl get pods -n test
cd eks-cluster
terraform init
terraform apply -var=cluster_name=$CLUSTERNAME
cd -
```

## Optional variables

The Kubernetes version can be specified at apply time:
Optionally, the Kubernetes version can be specified at apply time:

```
terraform apply -var=kubernetes_version=1.18
terraform apply -var=cluster_name=$CLUSTERNAME -var=kubernetes_version=1.18
```

See https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html for currently available versions.


### Worker node count and instance type
## Create Kubernetes resources

The number of worker nodes, and the instance type, can be specified at apply time:
Change into the `kubernetes-config` directory to apply Kubernetes resources to the new cluster.

```
terraform apply -var=workers_count=4 -var=workers_type=m4.xlarge
cd kubernetes-config
terraform init
terraform apply -var=cluster_name=$CLUSTERNAME
```
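
The contents of the `kubernetes-config` configuration are not part of this diff. A minimal sketch of how its provider might be wired to the new cluster, assuming the `aws_eks_cluster` and `aws_eks_cluster_auth` data sources and a sample namespace (the resource and variable names here are illustrative only):

```
# Sketch of kubernetes-config/main.tf (assumed; not shown in this diff).
data "aws_eks_cluster" "default" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "default" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}

resource "kubernetes_namespace" "test" {
  metadata {
    name = "test"
  }
}
```

Because the data sources are read at apply time, the provider always receives fresh cluster credentials, which is what lets the Kubernetes state stay cleanly separated from the EKS state.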

## Additional configuration of EKS

To view all available configuration options for the EKS module used in this example, see [terraform-aws-modules/eks docs](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest).
## Deleting the cluster

## Replacing the EKS cluster and re-creating the Kubernetes / Helm resources
First, delete the Kubernetes resources as shown below. This gives Ingress- and Service-related Load Balancers a chance to be deleted before the other AWS resources are removed.

When the cluster is initially created, the Kubernetes and Helm providers will not be initialized until authentication details are created for the cluster. However, for future operations that may involve replacing the underlying cluster (for example, changing the network where the EKS cluster resides), the EKS cluster will have to be targeted without the Kubernetes/Helm providers, as shown below. This is done by removing the `module.kubernetes-config` from Terraform State prior to replacing cluster credentials, to avoid passing outdated credentials into the providers.
```
cd kubernetes-config
terraform destroy -var=cluster_name=$CLUSTERNAME
cd -
```

This will create the new cluster and the Kubernetes resources in a single apply.
Then delete the EKS-related resources:

```
terraform state rm module.kubernetes-config
terraform apply
cd eks-cluster
terraform destroy -var=cluster_name=$CLUSTERNAME
cd -
```
37 changes: 37 additions & 0 deletions _examples/eks/eks-cluster/cluster.tf
@@ -0,0 +1,37 @@
resource "aws_eks_cluster" "k8s-acc" {
name = var.cluster_name
version = var.kubernetes_version
role_arn = aws_iam_role.k8s-acc-cluster.arn

vpc_config {
subnet_ids = aws_subnet.k8s-acc.*.id
}

# Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
# Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
depends_on = [
aws_iam_role_policy_attachment.k8s-acc-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.k8s-acc-AmazonEKSVPCResourceController,
]
}

resource "aws_eks_node_group" "k8s-acc" {
cluster_name = aws_eks_cluster.k8s-acc.name
node_group_name = var.cluster_name
node_role_arn = aws_iam_role.k8s-acc-node.arn
subnet_ids = aws_subnet.k8s-acc.*.id

scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}

# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.k8s-acc-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.k8s-acc-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.k8s-acc-AmazonEC2ContainerRegistryReadOnly,
]
}
60 changes: 60 additions & 0 deletions _examples/eks/eks-cluster/iam.tf
@@ -0,0 +1,60 @@
resource "aws_iam_role" "k8s-acc-cluster" {
name = var.cluster_name

assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.k8s-acc-cluster.name
}

# Optionally, enable Security Groups for Pods
# Reference: https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html
resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSVPCResourceController" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
role = aws_iam_role.k8s-acc-cluster.name
}

resource "aws_iam_role" "k8s-acc-node" {
name = "${var.cluster_name}-node"

assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}]
Version = "2012-10-17"
})
}

resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.k8s-acc-node.name
}

resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.k8s-acc-node.name
}

resource "aws_iam_role_policy_attachment" "k8s-acc-AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.k8s-acc-node.name
}
8 changes: 8 additions & 0 deletions _examples/eks/eks-cluster/variables.tf
@@ -0,0 +1,8 @@
variable "cluster_name" {
type = string
}

variable "kubernetes_version" {
type = string
default = "1.19"
}
10 changes: 10 additions & 0 deletions _examples/eks/eks-cluster/version.tf
@@ -0,0 +1,10 @@
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.38.0"
    }
  }
}


28 changes: 7 additions & 21 deletions _examples/eks/vpc/main.tf → _examples/eks/eks-cluster/vpc.tf
@@ -1,32 +1,16 @@
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "3.22.0"
}
}
}

# Using these data sources allows the configuration to be
# generic for any region.
data "aws_region" "current" {
}

data "aws_availability_zones" "available" {
}

resource "random_id" "cluster_name" {
byte_length = 2
prefix = "k8s-acc-"
}

resource "aws_vpc" "k8s-acc" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
"Name" = "terraform-eks-k8s-acc-node"
"kubernetes.io/cluster/${random_id.cluster_name.hex}" = "shared"
"Name" = "terraform-eks-k8s-acc-node"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
}
}

@@ -39,9 +23,9 @@ resource "aws_subnet" "k8s-acc" {
map_public_ip_on_launch = true

tags = {
"Name" = "terraform-eks-k8s-acc-node"
"kubernetes.io/cluster/${random_id.cluster_name.hex}" = "shared"
"kubernetes.io/role/elb" = 1
"Name" = "terraform-eks-k8s-acc-node"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
}

@@ -68,3 +52,5 @@ resource "aws_route_table_association" "k8s-acc" {
subnet_id = aws_subnet.k8s-acc[count.index].id
route_table_id = aws_route_table.k8s-acc.id
}


28 changes: 28 additions & 0 deletions _examples/eks/kubernetes-config/kubeconfig.tpl
@@ -0,0 +1,28 @@
apiVersion: v1
preferences: {}
kind: Config

clusters:
- cluster:
    server: ${endpoint}
    certificate-authority-data: ${clusterca}
  name: ${cluster_name}

contexts:
- context:
    cluster: ${cluster_name}
    user: ${cluster_name}
  name: ${cluster_name}

current-context: ${cluster_name}

users:
- name: ${cluster_name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - --cluster-id
        - ${cluster_name}
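
This template is presumably rendered into the kubeconfig used for manual `kubectl` access. A sketch of one way to do that with `templatefile` and a `local_file` resource (the resource, data source, and variable names are assumptions, not part of this diff):

```
# Sketch only: render kubeconfig.tpl to disk for manual kubectl access.
resource "local_file" "kubeconfig" {
  content = templatefile("${path.module}/kubeconfig.tpl", {
    cluster_name = var.cluster_name
    endpoint     = data.aws_eks_cluster.default.endpoint
    clusterca    = data.aws_eks_cluster.default.certificate_authority[0].data
  })
  filename = "./kubeconfig-${var.cluster_name}"
}
```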