4 changes: 3 additions & 1 deletion .pre-commit-config.yaml
@@ -1,7 +1,9 @@
repos:
- repo: git://github.com/antonbabenko/pre-commit-terraform
rev: v1.19.0
rev: v1.22.0
hooks:
- id: terraform_fmt
- id: terraform_docs
args: [--args=--with-aggregate-type-defaults --no-escape]
- id: terraform_validate
- id: terraform_tflint
15 changes: 12 additions & 3 deletions CHANGELOG.md
@@ -7,13 +7,21 @@ project adheres to [Semantic Versioning](http://semver.org/).

## Next release

## [[v8.?.?](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v7.0.0...HEAD)] - 2019-??-??]
## [[v8.?.?](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v8.0.0...HEAD)] - 2019-12-11

- Write your awesome change here (by @you)
- Add support for restricting access to the public API endpoint (@sidprak)

# History

## [[v8.0.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v8.0.0...v7.0.1)] - 2019-12-11

- **Breaking:** Changed the security group whitelisting logic: the worker security group is now always whitelisted on the control plane security group, whether you supply an existing group or a new one is created. See Important notes below for upgrade notes (by @ryanooi)
- **Breaking:** Configure the aws-auth configmap using the terraform kubernetes providers. See Important notes below for upgrade notes (by @sdehaes)
- Wait for the cluster to respond to kubectl before applying the aws-auth configmap (by @shaunc)
- Added flag `create_eks` to conditionally create resources (by @syst0m / @tbeijen)
- Support for AWS EKS Managed Node Groups. (by @wmorgan6796)
- Added a check on the `aws-auth` configmap when `map_roles` is empty (by @shanmugakarna)
- Removed no longer used variable `write_aws_auth_config` (by @tbeijen)
- Exit with an error code when the `aws-auth` configmap cannot be updated (by @knittingdev)
- Fix deprecated interpolation-only expression (by @angelabad)
@@ -25,7 +33,8 @@ project adheres to [Semantic Versioning](http://semver.org/).
- Added support to create IAM OpenID Connect Identity Provider to enable EKS Identity Roles for Service Accounts (IRSA). (by @alaa)
- Adding node group iam role arns to outputs. (by @mukgupta)
- Added the OIDC Provider ARN to outputs. (by @eytanhanig)
- **Breaking:** Change logic of security group whitelisting. Will always whitelist worker security group on control plane security group either provide one or create new one. See Important notes below for upgrade notes (by @ryanooi)
- Move `eks_node_group` resources to a submodule (by @dpiddockcmp)
- Add complex output `node_groups` (by @TBeijen)

#### Important notes

154 changes: 78 additions & 76 deletions README.md

Large diffs are not rendered by default.

7 changes: 2 additions & 5 deletions aws_auth.tf
@@ -43,13 +43,10 @@ data "template_file" "worker_role_arns" {
}

data "template_file" "node_group_arns" {
count = var.create_eks ? local.worker_group_managed_node_group_count : 0
count = var.create_eks ? length(module.node_groups.aws_auth_roles) : 0
template = file("${path.module}/templates/worker-role.tpl")

vars = {
worker_role_arn = lookup(var.node_groups[count.index], "iam_role_arn", aws_iam_role.node_groups[0].arn)
platform = "linux" # Hardcoded because the EKS API currently only supports linux for managed node groups
}
vars = module.node_groups.aws_auth_roles[count.index]
}

resource "kubernetes_config_map" "aws_auth" {
3 changes: 2 additions & 1 deletion cluster.tf
@@ -19,6 +19,7 @@ resource "aws_eks_cluster" "this" {
subnet_ids = var.subnets
endpoint_private_access = var.cluster_endpoint_private_access
endpoint_public_access = var.cluster_endpoint_public_access
public_access_cidrs = var.cluster_endpoint_public_access_cidrs
}

timeouts {
@@ -33,7 +34,7 @@ resource "aws_eks_cluster" "this" {
]
provisioner "local-exec" {
command = <<EOT
until curl -k ${aws_eks_cluster.this[0].endpoint}/healthz >/dev/null; do sleep 4; done
until curl -k -s ${aws_eks_cluster.this[0].endpoint}/healthz >/dev/null; do sleep 4; done
EOT
}
}
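
The new `public_access_cidrs` argument is wired to the `cluster_endpoint_public_access_cidrs` variable. A minimal sketch of restricting the public API endpoint from a root module (the VPC, subnet IDs, and CIDR below are placeholders):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "example"
  subnets      = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"] # placeholders
  vpc_id       = "vpc-0123456789abcdef0"                                  # placeholder

  # Keep the public endpoint, but only reachable from a trusted CIDR block
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"]
}
```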
25 changes: 13 additions & 12 deletions examples/managed_node_groups/main.tf
@@ -92,27 +92,28 @@ module "eks" {

vpc_id = module.vpc.vpc_id

node_groups = [
{
name = "example"
node_groups_defaults = {
ami_type = "AL2_x86_64"
disk_size = 50
}

node_group_desired_capacity = 1
node_group_max_capacity = 10
node_group_min_capacity = 1
node_groups = {
example = {
desired_capacity = 1
max_capacity = 10
min_capacity = 1

instance_type = "m5.large"
node_group_k8s_labels = {
k8s_labels = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
node_group_additional_tags = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
additional_tags = {
ExtraTag = "example"
}
}
]
}

map_roles = var.map_roles
map_users = var.map_users
4 changes: 4 additions & 0 deletions examples/managed_node_groups/outputs.tf
@@ -23,3 +23,7 @@ output "region" {
value = var.region
}

output "node_groups" {
description = "Outputs from node groups"
value = module.eks.node_groups
}
17 changes: 2 additions & 15 deletions local.tf
@@ -16,9 +16,8 @@ locals {
default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
kubeconfig_name = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name

worker_group_count = length(var.worker_groups)
worker_group_launch_template_count = length(var.worker_groups_launch_template)
worker_group_managed_node_group_count = length(var.node_groups)
worker_group_count = length(var.worker_groups)
worker_group_launch_template_count = length(var.worker_groups_launch_template)

default_ami_id_linux = data.aws_ami.eks_worker.id
default_ami_id_windows = data.aws_ami.eks_worker_windows.id
@@ -80,15 +79,6 @@ locals {
spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity.
spot_instance_pools = 10 # "Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify."
spot_max_price = "" # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price
ami_type = "AL2_x86_64" # AMI Type to use for the Managed Node Groups. Can be either: AL2_x86_64 or AL2_x86_64_GPU
ami_release_version = "" # AMI Release Version of the Managed Node Groups
source_security_group_id = [] # Source Security Group IDs to allow SSH Access to the Nodes. NOTE: IF LEFT BLANK, AND A KEY IS SPECIFIED, THE SSH PORT WILL BE OPENED TO THE WORLD
node_group_k8s_labels = {} # Kubernetes Labels to apply to the nodes within the Managed Node Group
node_group_desired_capacity = 1 # Desired capacity of the Node Group
node_group_min_capacity = 1 # Min capacity of the Node Group (Minimum value allowed is 1)
node_group_max_capacity = 3 # Max capacity of the Node Group
node_group_iam_role_arn = "" # IAM role to use for Managed Node Groups instead of default one created by the automation
node_group_additional_tags = {} # Additional tags to be applied to the Node Groups
}

workers_group_defaults = merge(
@@ -133,7 +123,4 @@ locals {
"t2.small",
"t2.xlarge"
]

node_groups = { for node_group in var.node_groups : node_group["name"] => node_group }

}
55 changes: 55 additions & 0 deletions modules/node_groups/README.md
@@ -0,0 +1,55 @@
# eks `node_groups` submodule

Helper submodule to create and manage resources related to `eks_node_groups`.

## Assumptions
* Designed for use by the parent module and not directly by end users

## Node Groups' IAM Role
The role ARN specified in `var.default_iam_role_arn` will be used by default. In a simple configuration this will be the worker role created by the parent module.

`iam_role_arn` must be specified in either `var.node_groups_defaults` or `var.node_groups` if the parent module's default IAM role is not being created, for example when `manage_worker_iam_resources` is set to false in the parent.
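
A minimal sketch of that second case, assuming the parent does not manage worker IAM resources (the account ID and role name are placeholders):

```hcl
node_groups_defaults = {
  # Used by every node group that does not set its own iam_role_arn
  iam_role_arn = "arn:aws:iam::123456789012:role/eks-node-group" # placeholder
}
```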

## `node_groups` and `node_groups_defaults` keys
`node_groups_defaults` is a map that can take the keys below. Its values are used when a key is not specified in an individual node group.

`node_groups` is a map of maps. The first-level key is used as the unique value for `for_each` resources and in the `aws_eks_node_group` name. The inner map can take the keys below.

| Name | Description | Type | If unset |
|------|-------------|:----:|:-----:|
| additional\_tags | Additional tags to apply to node group | map(string) | Only `var.tags` applied |
| ami\_release\_version | AMI version of workers | string | Provider default behavior |
| ami\_type | AMI Type. See Terraform or AWS docs | string | Provider default behavior |
| desired\_capacity | Desired number of workers | number | `var.workers_group_defaults[asg_desired_capacity]` |
| disk\_size | Workers' disk size | number | Provider default behavior |
| iam\_role\_arn | IAM role ARN for workers | string | `var.default_iam_role_arn` |
| instance\_type | Workers' instance type | string | `var.workers_group_defaults[instance_type]` |
| k8s\_labels | Kubernetes labels | map(string) | No labels applied |
| key\_name | Key name for workers. Set to empty string to disable remote access | string | `var.workers_group_defaults[key_name]` |
| max\_capacity | Max number of workers | number | `var.workers_group_defaults[asg_max_size]` |
| min\_capacity | Min number of workers | number | `var.workers_group_defaults[asg_min_size]` |
| source\_security\_group\_ids | Source security groups for remote access to workers | list(string) | If key\_name is specified: THE REMOTE ACCESS WILL BE OPENED TO THE WORLD |
| subnets | Subnets to contain workers | list(string) | `var.workers_group_defaults[subnets]` |
| version | Kubernetes version | string | Provider default behavior |
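
Putting the keys above together, a sketch of one entry in `var.node_groups` (the key pair name and security group ID are placeholders):

```hcl
node_groups = {
  example = {
    desired_capacity = 1
    min_capacity     = 1
    max_capacity     = 10

    instance_type = "m5.large"
    disk_size     = 50
    ami_type      = "AL2_x86_64"

    # Remote access: without source_security_group_ids, SSH would be open to the world
    key_name                  = "my-key"                 # placeholder
    source_security_group_ids = ["sg-0123456789abcdef0"] # placeholder

    k8s_labels      = { Environment = "test" }
    additional_tags = { ExtraTag = "example" }
  }
}
```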

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|:----:|:-----:|:-----:|
| cluster_name | Name of parent cluster | string | n/a | yes |
| create_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no |
| default_iam_role_arn | ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults` | string | n/a | yes |
| node_groups | Map of maps of `eks_node_groups` to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | `{}` | no |
| node_groups_defaults | map of maps of node groups to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | n/a | yes |
| tags | A map of tags to add to all resources | map(string) | n/a | yes |
| workers_group_defaults | Workers group defaults from parent | any | n/a | yes |

## Outputs

| Name | Description |
|------|-------------|
| aws_auth_roles | Roles for use in aws-auth ConfigMap |
| node_groups | Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values |

<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
16 changes: 16 additions & 0 deletions modules/node_groups/locals.tf
@@ -0,0 +1,16 @@
locals {
# Merge defaults and per-group values to make code cleaner
node_groups_expanded = { for k, v in var.node_groups : k => merge(
{
desired_capacity = var.workers_group_defaults["asg_desired_capacity"]
iam_role_arn = var.default_iam_role_arn
instance_type = var.workers_group_defaults["instance_type"]
key_name = var.workers_group_defaults["key_name"]
max_capacity = var.workers_group_defaults["asg_max_size"]
min_capacity = var.workers_group_defaults["asg_min_size"]
subnets = var.workers_group_defaults["subnets"]
},
var.node_groups_defaults,
v,
) if var.create_eks }
}
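
The `merge()` above gives per-group values precedence over `var.node_groups_defaults`, which in turn overrides the fallbacks pulled from the parent's `workers_group_defaults`. An illustration of that precedence (all values hypothetical):

```hcl
# Given:
#   workers_group_defaults = { instance_type = "m4.large", asg_max_size = 3, ... }
#   node_groups_defaults   = { disk_size = 50 }
#   node_groups            = { example = { instance_type = "m5.large" } }
#
# node_groups_expanded["example"] then contains, among other keys:
#   instance_type = "m5.large" # per-group value wins
#   disk_size     = 50         # from node_groups_defaults
#   max_capacity  = 3          # falls through to workers_group_defaults["asg_max_size"]
```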
49 changes: 49 additions & 0 deletions modules/node_groups/node_groups.tf
@@ -0,0 +1,49 @@
resource "aws_eks_node_group" "workers" {
for_each = local.node_groups_expanded

node_group_name = join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])

cluster_name = var.cluster_name
node_role_arn = each.value["iam_role_arn"]
subnet_ids = each.value["subnets"]

scaling_config {
desired_size = each.value["desired_capacity"]
max_size = each.value["max_capacity"]
min_size = each.value["min_capacity"]
}

ami_type = lookup(each.value, "ami_type", null)
disk_size = lookup(each.value, "disk_size", null)
instance_types = [each.value["instance_type"]]
release_version = lookup(each.value, "ami_release_version", null)

dynamic "remote_access" {
for_each = each.value["key_name"] != "" ? [{
ec2_ssh_key = each.value["key_name"]
source_security_group_ids = lookup(each.value, "source_security_group_ids", [])
}] : []

content {
ec2_ssh_key = remote_access.value["ec2_ssh_key"]
source_security_group_ids = remote_access.value["source_security_group_ids"]
}
}

version = lookup(each.value, "version", null)

labels = merge(
lookup(var.node_groups_defaults, "k8s_labels", {}),
lookup(var.node_groups[each.key], "k8s_labels", {})
)

tags = merge(
var.tags,
lookup(var.node_groups_defaults, "additional_tags", {}),
lookup(var.node_groups[each.key], "additional_tags", {}),
)

lifecycle {
create_before_destroy = true
}
}
14 changes: 14 additions & 0 deletions modules/node_groups/outputs.tf
@@ -0,0 +1,14 @@
output "node_groups" {
description = "Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values"
value = aws_eks_node_group.workers
}

output "aws_auth_roles" {
description = "Roles for use in aws-auth ConfigMap"
value = [
for k, v in local.node_groups_expanded : {
worker_role_arn = lookup(v, "iam_role_arn", var.default_iam_role_arn)
platform = "linux"
}
]
}
21 changes: 21 additions & 0 deletions modules/node_groups/random.tf
@@ -0,0 +1,21 @@
resource "random_pet" "node_groups" {
for_each = local.node_groups_expanded

separator = "-"
length = 2

keepers = {
ami_type = lookup(each.value, "ami_type", null)
disk_size = lookup(each.value, "disk_size", null)
instance_type = each.value["instance_type"]
iam_role_arn = each.value["iam_role_arn"]

key_name = each.value["key_name"]

source_security_group_ids = join("|", compact(
lookup(each.value, "source_security_group_ids", [])
))
subnet_ids = join("|", each.value["subnets"])
node_group_name = join("-", [var.cluster_name, each.key])
}
}
36 changes: 36 additions & 0 deletions modules/node_groups/variables.tf
@@ -0,0 +1,36 @@
variable "create_eks" {
description = "Controls if EKS resources should be created (it affects almost all resources)"
type = bool
default = true
}

variable "cluster_name" {
description = "Name of parent cluster"
type = string
}

variable "default_iam_role_arn" {
description = "ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults`"
type = string
}

variable "workers_group_defaults" {
description = "Workers group defaults from parent"
type = any
}

variable "tags" {
description = "A map of tags to add to all resources"
type = map(string)
}

variable "node_groups_defaults" {
description = "map of maps of node groups to create. See \"`node_groups` and `node_groups_defaults` keys\" section in README.md for more details"
type = any
}

variable "node_groups" {
description = "Map of maps of `eks_node_groups` to create. See \"`node_groups` and `node_groups_defaults` keys\" section in README.md for more details"
type = any
default = {}
}