Upgrade to terraform 0.12 (#394)
* run terraform upgrade tool

* fix post upgrade TODOs

* use strict typing for variables

* upgrade examples, point them at VPC module tf 0.12 PR

* remove unnecessary `coalesce()` calls

coalesce(lookup(map, key, ""), default) -> lookup(map, key, default) (illustrated in the first sketch after this list)

* Fix broken `autoscaling_enabled` (#1)

* always set a value for tags, fix coalescelist calls

* always set a value for these tags

* fix tag value

* fix tag value

* default element available

* added default value

* added a general default

without this default, Terraform throws an error when running a destroy

* Fix CI

* Change vpc module back to `terraform-aws-modules/vpc/aws` in example

* Update CHANGELOG.md

* Change type of variable `cluster_log_retention_in_days` to number

* Remove `xx_count` variables

* Use actual lists instead of comma-separated strings (see the second sketch after this list)

* Remove `xx_count` variable from docs

* Replace `element()` with list indexing

* Change variable `worker_group_tags` to an attribute of worker_group (also covered in the second sketch below)

* Fix workers_launch_template_mixed tags

* Change `override_instance_type_x` variables to a list.

* Update CHANGELOG.md
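
The `coalesce()` cleanup noted above boils down to the following simplification (first sketch; `instance_type` and `workers_group_defaults` stand in for whichever map keys the module actually touches, so treat the names as illustrative):

    # Before: lookup() falls back to "", and coalesce() patches over it.
    instance_type = coalesce(
      lookup(var.worker_groups[count.index], "instance_type", ""),
      local.workers_group_defaults["instance_type"],
    )

    # After: lookup() supplies the real default directly.
    instance_type = lookup(
      var.worker_groups[count.index],
      "instance_type",
      local.workers_group_defaults["instance_type"],
    )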
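
Taken together, the list, tag, and override changes above reshape a worker group entry roughly as follows (second sketch; the keys mirror the README example, the values are invented, and `override_instance_types` is an assumed name for the merged override list):

    worker_groups = [
      {
        name          = "default"
        instance_type = "m4.large"
        asg_max_size  = 5

        # 0.11 style: subnets = "subnet-aaa,subnet-bbb" (comma-separated string)
        subnets = ["subnet-aaa", "subnet-bbb"]

        # 0.11 style: override_instance_type_1, override_instance_type_2, ... variables
        override_instance_types = ["m5.large", "m5a.large"]

        # 0.11 style: a separate worker_group_tags map variable
        tags = {
          key                 = "foo"
          value               = "bar"
          propagate_at_launch = true
        }
      },
    ]
    # No worker_group_count any more: the module derives it with length(var.worker_groups).
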
nauxliu authored and max-rocket-internet committed Jun 19, 2019
1 parent 3f06015 commit da2c78b
Showing 24 changed files with 1,275 additions and 636 deletions.
2 changes: 1 addition & 1 deletion .travis.yml
@@ -23,7 +23,7 @@ install:
   - bundle install

 before_script:
-  - export TERRAFORM_VERSION=0.11.14
+  - export TERRAFORM_VERSION=0.12.2
   - curl --silent --output terraform.zip "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip"
   - unzip terraform.zip ; rm -f terraform.zip; chmod +x terraform
   - mkdir -p ${HOME}/bin ; export PATH=${PATH}:${HOME}/bin; mv terraform ${HOME}/bin/

7 changes: 6 additions & 1 deletion CHANGELOG.md
@@ -7,7 +7,7 @@ project adheres to [Semantic Versioning](http://semver.org/).

 ## Next release

-## [[v4.?.?](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v4.0.0...HEAD)] - 2019-06-??]
+## [[v5.0.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v4.0.2...HEAD)] - 2019-06-??]

 ### Added

@@ -18,6 +18,11 @@ project adheres to [Semantic Versioning](http://semver.org/).

 ### Changed

+- Finally, Terraform 0.12 support, [Upgrade Guide](https://github.com/terraform-aws-modules/terraform-aws-eks/pull/394) (by @alex-goncharov @nauxliu @timboven)
+- All the xx_count variables have been removed (by @nauxliu on behalf of RightCapital)
+- Use actual lists in the workers group maps instead of strings with commas (by @nauxliu on behalf of RightCapital)
+- Move variable `worker_group_tags` to workers group's attribute `tags` (by @nauxliu on behalf of RightCapital)
+- Change override instance_types to list (by @nauxliu on behalf of RightCapital)
 - Fix toggle for IAM instance profile creation for mixed launch templates (by @jnozo)

 # History
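
For callers, dropping the `xx_count` variables means the companion count arguments simply disappear from the module block. A hypothetical before/after fragment (not a single runnable file, and the user list is invented; `map_users` is shown, the other maps follow the same pattern):

    # Terraform 0.11: a computed list could not feed count, so the
    # length had to be passed in by hand.
    module "eks" {
      source          = "terraform-aws-modules/eks/aws"
      # ...
      map_users       = local.admin_users
      map_users_count = 2
    }

    # Terraform 0.12: the module computes length(var.map_users) itself.
    module "eks" {
      source    = "terraform-aws-modules/eks/aws"
      # ...
      map_users = local.admin_users
    }
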
13 changes: 5 additions & 8 deletions README.md
@@ -32,6 +32,11 @@ module "my-cluster" {
     {
       instance_type = "m4.large"
       asg_max_size  = 5
+      tags = {
+        key                 = "foo"
+        value               = "bar"
+        propagate_at_launch = true
+      }
     }
   ]
@@ -130,29 +135,21 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | manage\_cluster\_iam\_resources | Whether to let the module manage cluster IAM resources. If set to false, cluster_iam_role_name must be specified. | string | `"true"` | no |
 | manage\_worker\_iam\_resources | Whether to let the module manage worker IAM resources. If set to false, iam_instance_profile_name must be specified for workers. | string | `"true"` | no |
 | map\_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list | `[]` | no |
-| map\_accounts\_count | The count of accounts in the map_accounts list. | string | `"0"` | no |
 | map\_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list | `[]` | no |
-| map\_roles\_count | The count of roles in the map_roles list. | string | `"0"` | no |
 | map\_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list | `[]` | no |
-| map\_users\_count | The count of roles in the map_users list. | string | `"0"` | no |
 | permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | string | `""` | no |
 | subnets | A list of subnets to place the EKS cluster and workers within. | list | n/a | yes |
 | tags | A map of tags to add to all resources. | map | `{}` | no |
 | vpc\_id | VPC where the cluster and workers will be deployed. | string | n/a | yes |
 | worker\_additional\_security\_group\_ids | A list of additional security group ids to attach to worker instances | list | `[]` | no |
 | worker\_ami\_name\_filter | Additional name filter for AWS EKS worker AMI. Default behaviour will get latest for the cluster_version but could be set to a release from amazon-eks-ami, e.g. "v20190220" | string | `"v*"` | no |
 | worker\_create\_security\_group | Whether to create a security group for the workers or attach the workers to `worker_security_group_id`. | string | `"true"` | no |
-| worker\_group\_count | The number of maps contained within the worker_groups list. | string | `"1"` | no |
-| worker\_group\_launch\_template\_count | The number of maps contained within the worker_groups_launch_template list. | string | `"0"` | no |
-| worker\_group\_launch\_template\_mixed\_count | The number of maps contained within the worker_groups_launch_template_mixed list. | string | `"0"` | no |
-| worker\_group\_tags | A map defining extra tags to be applied to the worker group ASG. | map | `{ "default": [] }` | no |
 | worker\_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers_group_defaults for valid keys. | list | `[ { "name": "default" } ]` | no |
 | worker\_groups\_launch\_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | list | `[ { "name": "default" } ]` | no |
 | worker\_groups\_launch\_template\_mixed | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | list | `[ { "name": "default" } ]` | no |
 | worker\_security\_group\_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingres/egress to work with the EKS cluster. | string | `""` | no |
 | worker\_sg\_ingress\_from\_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | string | `"1025"` | no |
 | workers\_additional\_policies | Additional policies to be added to workers | list | `[]` | no |
-| workers\_additional\_policies\_count | | string | `"0"` | no |
 | workers\_group\_defaults | Override default values for target groups. See workers_group_defaults_defaults in local.tf for valid keys. | map | `{}` | no |
 | write\_aws\_auth\_config | Whether to write the aws-auth configmap file. | string | `"true"` | no |
 | write\_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`. | string | `"true"` | no |
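
The same release also tightens variable typing (see "use strict typing for variables" in the commit message); for example, `cluster_log_retention_in_days` is now a number rather than a string. A sketch of the 0.12-style declaration (the description and default here are illustrative, not read from this diff):

    variable "cluster_log_retention_in_days" {
      description = "Number of days to retain cluster log events."
      type        = number # was an untyped string in the 0.11 module
      default     = 90
    }
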
143 changes: 90 additions & 53 deletions aws_auth.tf
@@ -1,103 +1,140 @@
resource "local_file" "config_map_aws_auth" {
count = "${var.write_aws_auth_config ? 1 : 0}"
content = "${data.template_file.config_map_aws_auth.rendered}"
count = var.write_aws_auth_config ? 1 : 0
content = data.template_file.config_map_aws_auth.rendered
filename = "${var.config_output_path}config-map-aws-auth_${var.cluster_name}.yaml"
}

resource "null_resource" "update_config_map_aws_auth" {
count = "${var.manage_aws_auth ? 1 : 0}"
depends_on = ["aws_eks_cluster.this"]
count = var.manage_aws_auth ? 1 : 0
depends_on = [aws_eks_cluster.this]

provisioner "local-exec" {
working_dir = "${path.module}"
working_dir = path.module

command = <<EOS
for i in `seq 1 10`; do \
echo "${null_resource.update_config_map_aws_auth.triggers.kube_config_map_rendered}" > kube_config.yaml & \
echo "${null_resource.update_config_map_aws_auth.triggers.config_map_rendered}" > aws_auth_configmap.yaml & \
echo "${null_resource.update_config_map_aws_auth[0].triggers.kube_config_map_rendered}" > kube_config.yaml & \
echo "${null_resource.update_config_map_aws_auth[0].triggers.config_map_rendered}" > aws_auth_configmap.yaml & \
kubectl apply -f aws_auth_configmap.yaml --kubeconfig kube_config.yaml && break || \
sleep 10; \
done; \
rm aws_auth_configmap.yaml kube_config.yaml;
EOS

interpreter = ["${var.local_exec_interpreter}"]

interpreter = var.local_exec_interpreter
}

triggers {
kube_config_map_rendered = "${data.template_file.kubeconfig.rendered}"
config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
endpoint = "${aws_eks_cluster.this.endpoint}"
triggers = {
kube_config_map_rendered = data.template_file.kubeconfig.rendered
config_map_rendered = data.template_file.config_map_aws_auth.rendered
endpoint = aws_eks_cluster.this.endpoint
}
}

data "aws_caller_identity" "current" {}
data "aws_caller_identity" "current" {
}

data "template_file" "launch_template_mixed_worker_role_arns" {
count = "${var.worker_group_launch_template_mixed_count}"
template = "${file("${path.module}/templates/worker-role.tpl")}"

vars {
worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${element(coalescelist(aws_iam_instance_profile.workers_launch_template_mixed.*.role, data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.role_name), count.index)}"
count = local.worker_group_launch_template_mixed_count
template = file("${path.module}/templates/worker-role.tpl")

vars = {
worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${element(
coalescelist(
aws_iam_instance_profile.workers_launch_template_mixed.*.role,
data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.role_name,
),
count.index,
)}"
}
}

data "template_file" "launch_template_worker_role_arns" {
count = "${var.worker_group_launch_template_count}"
template = "${file("${path.module}/templates/worker-role.tpl")}"

vars {
worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${element(coalescelist(aws_iam_instance_profile.workers_launch_template.*.role, data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile.*.role_name), count.index)}"
count = local.worker_group_launch_template_count
template = file("${path.module}/templates/worker-role.tpl")

vars = {
worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${element(
coalescelist(
aws_iam_instance_profile.workers_launch_template.*.role,
data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile.*.role_name,
),
count.index,
)}"
}
}

data "template_file" "worker_role_arns" {
count = "${var.worker_group_count}"
template = "${file("${path.module}/templates/worker-role.tpl")}"

vars {
worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${element(coalescelist(aws_iam_instance_profile.workers.*.role, data.aws_iam_instance_profile.custom_worker_group_iam_instance_profile.*.role_name), count.index)}"
count = local.worker_group_count
template = file("${path.module}/templates/worker-role.tpl")

vars = {
worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${element(
coalescelist(
aws_iam_instance_profile.workers.*.role,
data.aws_iam_instance_profile.custom_worker_group_iam_instance_profile.*.role_name,
[""]
),
count.index,
)}"
}
}

data "template_file" "config_map_aws_auth" {
template = "${file("${path.module}/templates/config-map-aws-auth.yaml.tpl")}"

vars {
worker_role_arn = "${join("", distinct(concat(data.template_file.launch_template_worker_role_arns.*.rendered, data.template_file.worker_role_arns.*.rendered, data.template_file.launch_template_mixed_worker_role_arns.*.rendered)))}"
map_users = "${join("", data.template_file.map_users.*.rendered)}"
map_roles = "${join("", data.template_file.map_roles.*.rendered)}"
map_accounts = "${join("", data.template_file.map_accounts.*.rendered)}"
template = file("${path.module}/templates/config-map-aws-auth.yaml.tpl")

vars = {
worker_role_arn = join(
"",
distinct(
concat(
data.template_file.launch_template_worker_role_arns.*.rendered,
data.template_file.worker_role_arns.*.rendered,
data.template_file.launch_template_mixed_worker_role_arns.*.rendered,
),
),
)
map_users = join("", data.template_file.map_users.*.rendered)
map_roles = join("", data.template_file.map_roles.*.rendered)
map_accounts = join("", data.template_file.map_accounts.*.rendered)
}
}

data "template_file" "map_users" {
count = "${var.map_users_count}"
template = "${file("${path.module}/templates/config-map-aws-auth-map_users.yaml.tpl")}"

vars {
user_arn = "${lookup(var.map_users[count.index], "user_arn")}"
username = "${lookup(var.map_users[count.index], "username")}"
group = "${lookup(var.map_users[count.index], "group")}"
count = length(var.map_users)
template = file(
"${path.module}/templates/config-map-aws-auth-map_users.yaml.tpl",
)

vars = {
user_arn = var.map_users[count.index]["user_arn"]
username = var.map_users[count.index]["username"]
group = var.map_users[count.index]["group"]
}
}

data "template_file" "map_roles" {
count = "${var.map_roles_count}"
template = "${file("${path.module}/templates/config-map-aws-auth-map_roles.yaml.tpl")}"

vars {
role_arn = "${lookup(var.map_roles[count.index], "role_arn")}"
username = "${lookup(var.map_roles[count.index], "username")}"
group = "${lookup(var.map_roles[count.index], "group")}"
count = length(var.map_roles)
template = file(
"${path.module}/templates/config-map-aws-auth-map_roles.yaml.tpl",
)

vars = {
role_arn = var.map_roles[count.index]["role_arn"]
username = var.map_roles[count.index]["username"]
group = var.map_roles[count.index]["group"]
}
}

data "template_file" "map_accounts" {
count = "${var.map_accounts_count}"
template = "${file("${path.module}/templates/config-map-aws-auth-map_accounts.yaml.tpl")}"
count = length(var.map_accounts)
template = file(
"${path.module}/templates/config-map-aws-auth-map_accounts.yaml.tpl",
)

vars {
account_number = "${element(var.map_accounts, count.index)}"
vars = {
account_number = var.map_accounts[count.index]
}
}

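One subtlety in the `worker_role_arns` block above: the bare `[""]` added to `coalescelist()` is the "general default" from the commit message. `coalescelist()` returns its first non-empty list argument and errors when every argument is empty, which is exactly the state during a destroy once both instance-profile lists are gone. A standalone sketch of the behaviour (the local names are generic, not this module's):

    locals {
      created_profiles = [] # module-managed profiles; empty during destroy
      custom_profiles  = [] # user-supplied profiles; also empty

      # coalescelist(local.created_profiles, local.custom_profiles)             # error: all arguments empty
      role = coalescelist(local.created_profiles, local.custom_profiles, [""]) # -> [""]
    }
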
74 changes: 40 additions & 34 deletions cluster.tf
@@ -1,83 +1,89 @@
resource "aws_cloudwatch_log_group" "this" {
name = "/aws/eks/${var.cluster_name}/cluster"
retention_in_days = "${var.cluster_log_retention_in_days}"
retention_in_days = var.cluster_log_retention_in_days

count = "${length(var.cluster_enabled_log_types) > 0 ? 1 : 0}"
count = length(var.cluster_enabled_log_types) > 0 ? 1 : 0
}

resource "aws_eks_cluster" "this" {
name = "${var.cluster_name}"
enabled_cluster_log_types = "${var.cluster_enabled_log_types}"
role_arn = "${local.cluster_iam_role_arn}"
version = "${var.cluster_version}"
name = var.cluster_name
enabled_cluster_log_types = var.cluster_enabled_log_types
role_arn = local.cluster_iam_role_arn
version = var.cluster_version

vpc_config {
security_group_ids = ["${local.cluster_security_group_id}"]
subnet_ids = ["${var.subnets}"]
endpoint_private_access = "${var.cluster_endpoint_private_access}"
endpoint_public_access = "${var.cluster_endpoint_public_access}"
security_group_ids = [local.cluster_security_group_id]
subnet_ids = var.subnets
endpoint_private_access = var.cluster_endpoint_private_access
endpoint_public_access = var.cluster_endpoint_public_access
}

timeouts {
create = "${var.cluster_create_timeout}"
delete = "${var.cluster_delete_timeout}"
create = var.cluster_create_timeout
delete = var.cluster_delete_timeout
}

depends_on = [
"aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy",
"aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy",
"aws_cloudwatch_log_group.this",
aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy,
aws_cloudwatch_log_group.this
]
}

resource "aws_security_group" "cluster" {
count = "${var.cluster_create_security_group ? 1 : 0}"
name_prefix = "${var.cluster_name}"
count = var.cluster_create_security_group ? 1 : 0
name_prefix = var.cluster_name
description = "EKS cluster security group."
vpc_id = "${var.vpc_id}"
tags = "${merge(var.tags, map("Name", "${var.cluster_name}-eks_cluster_sg"))}"
vpc_id = var.vpc_id
tags = merge(
var.tags,
{
"Name" = "${var.cluster_name}-eks_cluster_sg"
},
)
}

resource "aws_security_group_rule" "cluster_egress_internet" {
count = "${var.cluster_create_security_group ? 1 : 0}"
count = var.cluster_create_security_group ? 1 : 0
description = "Allow cluster egress access to the Internet."
protocol = "-1"
security_group_id = "${aws_security_group.cluster.id}"
security_group_id = aws_security_group.cluster[0].id
cidr_blocks = ["0.0.0.0/0"]
from_port = 0
to_port = 0
type = "egress"
}

resource "aws_security_group_rule" "cluster_https_worker_ingress" {
count = "${var.cluster_create_security_group ? 1 : 0}"
count = var.cluster_create_security_group ? 1 : 0
description = "Allow pods to communicate with the EKS cluster API."
protocol = "tcp"
security_group_id = "${aws_security_group.cluster.id}"
source_security_group_id = "${local.worker_security_group_id}"
security_group_id = aws_security_group.cluster[0].id
source_security_group_id = local.worker_security_group_id
from_port = 443
to_port = 443
type = "ingress"
}

resource "aws_iam_role" "cluster" {
count = "${var.manage_cluster_iam_resources ? 1 : 0}"
name_prefix = "${var.cluster_name}"
assume_role_policy = "${data.aws_iam_policy_document.cluster_assume_role_policy.json}"
permissions_boundary = "${var.permissions_boundary}"
path = "${var.iam_path}"
count = var.manage_cluster_iam_resources ? 1 : 0
name_prefix = var.cluster_name
assume_role_policy = data.aws_iam_policy_document.cluster_assume_role_policy.json
permissions_boundary = var.permissions_boundary
path = var.iam_path
force_detach_policies = true
tags = "${var.tags}"
tags = var.tags
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
count = "${var.manage_cluster_iam_resources ? 1 : 0}"
count = var.manage_cluster_iam_resources ? 1 : 0
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = "${aws_iam_role.cluster.name}"
role = aws_iam_role.cluster[0].name
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSServicePolicy" {
count = "${var.manage_cluster_iam_resources ? 1 : 0}"
count = var.manage_cluster_iam_resources ? 1 : 0
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = "${aws_iam_role.cluster.name}"
role = aws_iam_role.cluster[0].name
}

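The change from `aws_security_group.cluster.id` to `aws_security_group.cluster[0].id` in the diff above follows from 0.12 treating every resource that uses `count` as a list of instances; the old attribute-on-a-counted-resource shorthand no longer resolves. A minimal standalone illustration (generic names, not this module's code):

    variable "create" {
      type    = bool
      default = true
    }

    variable "vpc_id" {
      type = string
    }

    resource "aws_security_group" "example" {
      count       = var.create ? 1 : 0
      name_prefix = "example"
      vpc_id      = var.vpc_id
    }

    # Terraform 0.11 tolerated aws_security_group.example.id on a counted
    # resource; 0.12 requires an explicit index (or a splat expression).
    output "sg_id" {
      value = aws_security_group.example[0].id
    }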