From 11147e9af34054c4c4576aa00938a2c65198ca5f Mon Sep 17 00:00:00 2001
From: Daniel Piddock <33028589+dpiddockcmp@users.noreply.github.com>
Date: Thu, 9 Jan 2020 12:53:08 +0100
Subject: [PATCH 1/4] Node groups submodule (#650)

* WIP Move node_groups to a submodule
* Split the old node_groups file up
* Start moving locals
* Simplify IAM creation logic
* depends_on from the TF docs
* Wire in the variables
* Call module from parent
* Allow customizing the role name, as per workers
* aws_auth ConfigMap for node_groups
* Get the managed_node_groups example to plan
* Get the basic example to plan too
* create_eks = false works

  "The true and false result expressions must have consistent types. The
  given expressions are object and object, respectively." Well, that's
  useful. But apparently set(string) and set() are ok. So everything else
  is more complicated. Thanks.

* Update Changelog
* Update README
* Wire in node_groups_defaults
* Remove node_groups from workers_defaults_defaults
* Synchronize random and node_group defaults
* Error: "name_prefix" cannot be longer than 32
* Update READMEs again
* Fix double destroy

  Was producing index errors when running destroy on an empty state.

* Remove duplicate iam_role in node_group

  I think this logic works. Needs some testing with an externally
  created role.

* Fix index fail if node group manually deleted
* Keep aws_auth template in top module

  Downside: count causes issues as usual: can't use distinct() in the
  child module, so there's a template render for every node_group even
  if only one role is really in use. Hopefully just output noise rather
  than a technical issue.

* Hack to have node_groups depend on aws_auth etc

  The AWS Node Groups create or edit the aws-auth ConfigMap so that
  nodes can join the cluster. This breaks the kubernetes resource, which
  cannot do a force create. Remove the race condition with an explicit
  dependency. Can't pull the IAM role out of the node_group any more.
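Two of the Terraform patterns mentioned in these commits — working around the conditional type-consistency error for `create_eks`, and merging worker defaults in locals — come together in the submodule's `locals.tf`. A rough, illustrative sketch (based on the diff in this patch, not authoritative module code):

```hcl
locals {
  # Rather than `var.create_eks ? var.node_groups : {}` (which can trip
  # "The true and false result expressions must have consistent types"
  # for object values), filter with an `if` clause inside the for expression.
  node_groups_expanded = { for k, v in var.node_groups : k => merge(
    {
      # Parent worker-group defaults form the base...
      desired_capacity = var.workers_group_defaults["asg_desired_capacity"]
      instance_type    = var.workers_group_defaults["instance_type"]
    },
    var.node_groups_defaults, # ...group-wide defaults override them...
    v,                        # ...and per-group settings win.
  ) if var.create_eks }
}
```

Later arguments to `merge()` take precedence, which is what lets a single node group override both the group-wide and parent defaults.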
* Pull variables via the random_pet to cut logic

  No point having the same logic in two different places

* Pass all ForceNew variables through the pet
* Do a deep merge of NG labels and tags
* Update README.. again
* Additional managed node outputs #644

  Add change from @TBeijen from PR #644

* Remove unused local
* Use more for_each
* Remove the change when create_eks = false
* Make documentation less confusing
* node_group version user configurable
* Pass through raw output from aws_eks_node_groups
* Merge workers defaults in the locals

  This simplifies the random_pet and aws_eks_node_group logic, which was
  causing much consternation on the PR.

* Fix typo

Co-authored-by: Max Williams
---
 CHANGELOG.md                            |   2 +
 README.md                               |   5 +-
 aws_auth.tf                             |   7 +-
 examples/managed_node_groups/main.tf    |  26 ++---
 examples/managed_node_groups/outputs.tf |   4 +
 local.tf                                |  17 +--
 modules/node_groups/README.md           |  55 ++++++++++
 modules/node_groups/locals.tf           |  16 +++
 modules/node_groups/node_groups.tf      |  49 +++++++++
 modules/node_groups/outputs.tf          |  14 +++
 modules/node_groups/random.tf           |  21 ++++
 modules/node_groups/variables.tf        |  36 +++++++
 node_groups.tf                          | 133 +++++------------------
 outputs.tf                              |   9 +-
 variables.tf                            |  10 +-
 15 files changed, 254 insertions(+), 150 deletions(-)
 create mode 100644 modules/node_groups/README.md
 create mode 100644 modules/node_groups/locals.tf
 create mode 100644 modules/node_groups/node_groups.tf
 create mode 100644 modules/node_groups/outputs.tf
 create mode 100644 modules/node_groups/random.tf
 create mode 100644 modules/node_groups/variables.tf

diff --git a/CHANGELOG.md b/CHANGELOG.md
index c3c19fd736..f1e6b00180 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -26,6 +26,8 @@ project adheres to [Semantic Versioning](http://semver.org/).
 - Adding node group iam role arns to outputs. (by @mukgupta)
 - Added the OIDC Provider ARN to outputs. (by @eytanhanig)
 - **Breaking:** Change logic of security group whitelisting.
Will always whitelist the worker security group on the control plane
security group; either provide one or create a new one. See Important
notes below for upgrade notes (by @ryanooi)
+- Move `eks_node_group` resources to a submodule (by @dpiddockcmp)
+- Add complex output `node_groups` (by @TBeijen)
 
 #### Important notes

diff --git a/README.md b/README.md
index 059e13294a..05dbed13c9 100644
--- a/README.md
+++ b/README.md
@@ -181,7 +181,8 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | map\_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list(string) | `[]` | no |
 | map\_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no |
 | map\_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no |
-| node\_groups | A list of maps defining node group configurations to be defined using AWS EKS Managed Node Groups. See workers_group_defaults for valid keys. | any | `[]` | no |
+| node\_groups | Map of maps of node groups to create. See `node_groups` module's documentation for more details | any | `{}` | no |
+| node\_groups\_defaults | Map of values to be applied to all node groups. See `node_groups` module's documentation for more details | any | `{}` | no |
 | permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | string | `"null"` | no |
 | subnets | A list of subnets to place the EKS cluster and workers within. | list(string) | n/a | yes |
 | tags | A map of tags to add to all resources. | map(string) | `{}` | no |
@@ -218,7 +219,7 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | config\_map\_aws\_auth | A kubernetes configuration to authenticate to this EKS cluster. |
 | kubeconfig | kubectl config file contents for this EKS cluster. |
 | kubeconfig\_filename | The filename of the generated kubectl config. |
-| node\_groups\_iam\_role\_arns | IAM role ARNs for EKS node groups |
+| node\_groups | Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys |
 | oidc\_provider\_arn | The ARN of the OIDC Provider if `enable_irsa = true`. |
 | worker\_autoscaling\_policy\_arn | ARN of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` |
 | worker\_autoscaling\_policy\_name | Name of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` |

diff --git a/aws_auth.tf b/aws_auth.tf
index a2a25ec134..cce8f667c4 100644
--- a/aws_auth.tf
+++ b/aws_auth.tf
@@ -43,13 +43,10 @@ data "template_file" "worker_role_arns" {
 }
 
 data "template_file" "node_group_arns" {
-  count    = var.create_eks ? local.worker_group_managed_node_group_count : 0
+  count    = var.create_eks ? length(module.node_groups.aws_auth_roles) : 0
   template = file("${path.module}/templates/worker-role.tpl")
 
-  vars = {
-    worker_role_arn = lookup(var.node_groups[count.index], "iam_role_arn", aws_iam_role.node_groups[0].arn)
-    platform        = "linux" # Hardcoded because the EKS API currently only supports linux for managed node groups
-  }
+  vars = module.node_groups.aws_auth_roles[count.index]
 }
 
 resource "kubernetes_config_map" "aws_auth" {

diff --git a/examples/managed_node_groups/main.tf b/examples/managed_node_groups/main.tf
index 4dd84e7a87..db55e7d220 100644
--- a/examples/managed_node_groups/main.tf
+++ b/examples/managed_node_groups/main.tf
@@ -92,27 +92,29 @@ module "eks" {
   vpc_id = module.vpc.vpc_id
 
-  node_groups = [
-    {
-      name = "example"
+  node_groups_defaults = {
+    ami_type  = "AL2_x86_64"
+    disk_size = 50
+  }
 
-      node_group_desired_capacity = 1
-      node_group_max_capacity     = 10
-      node_group_min_capacity     = 1
+  node_groups = {
+    example = {
+      desired_capacity = 1
+      max_capacity     = 10
+      min_capacity     = 1
 
       instance_type = "m5.large"
-      node_group_k8s_labels = {
+      k8s_labels = {
         Environment = "test"
         GithubRepo  = "terraform-aws-eks"
         GithubOrg   = "terraform-aws-modules"
       }
-      node_group_additional_tags = {
-        Environment = "test"
-        GithubRepo  = "terraform-aws-eks"
-        GithubOrg   = "terraform-aws-modules"
+      additional_tags = {
+        ExtraTag = "example"
       }
     }
-  ]
+    defaults = {}
+  }
 
   map_roles = var.map_roles
   map_users = var.map_users

diff --git a/examples/managed_node_groups/outputs.tf b/examples/managed_node_groups/outputs.tf
index a0788aff1d..7010db294f 100644
--- a/examples/managed_node_groups/outputs.tf
+++ b/examples/managed_node_groups/outputs.tf
@@ -23,3 +23,7 @@ output "region" {
   value = var.region
 }
 
+output "node_groups" {
+  description = "Outputs from node groups"
+  value       = module.eks.node_groups
+}

diff --git a/local.tf b/local.tf
index 1a604dfa67..609185816f 100644
--- a/local.tf
+++ b/local.tf
@@ -16,9 +16,8 @@ locals {
   default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
   kubeconfig_name     = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name
 
-  worker_group_count                    = length(var.worker_groups)
-  worker_group_launch_template_count    = length(var.worker_groups_launch_template)
-  worker_group_managed_node_group_count = length(var.node_groups)
+  worker_group_count                 = length(var.worker_groups)
+  worker_group_launch_template_count = length(var.worker_groups_launch_template)
 
   default_ami_id_linux   = data.aws_ami.eks_worker.id
   default_ami_id_windows = data.aws_ami.eks_worker_windows.id
@@ -80,15 +79,6 @@ locals {
     spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity.
    spot_instance_pools      = 10 # "Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify."
     spot_max_price           = "" # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price
-    ami_type                    = "AL2_x86_64" # AMI Type to use for the Managed Node Groups. Can be either: AL2_x86_64 or AL2_x86_64_GPU
-    ami_release_version         = ""           # AMI Release Version of the Managed Node Groups
-    source_security_group_id    = []           # Source Security Group IDs to allow SSH Access to the Nodes. NOTE: IF LEFT BLANK, AND A KEY IS SPECIFIED, THE SSH PORT WILL BE OPENNED TO THE WORLD
-    node_group_k8s_labels       = {}           # Kubernetes Labels to apply to the nodes within the Managed Node Group
-    node_group_desired_capacity = 1            # Desired capacity of the Node Group
-    node_group_min_capacity     = 1            # Min capacity of the Node Group (Minimum value allowed is 1)
-    node_group_max_capacity     = 3            # Max capacity of the Node Group
-    node_group_iam_role_arn     = ""           # IAM role to use for Managed Node Groups instead of default one created by the automation
-    node_group_additional_tags  = {}           # Additional tags to be applied to the Node Groups
   }
 
   workers_group_defaults = merge(
@@ -133,7 +123,4 @@ locals {
     "t2.small",
     "t2.xlarge"
   ]
-
-  node_groups = { for node_group in var.node_groups : node_group["name"] => node_group }
-
 }

diff --git a/modules/node_groups/README.md b/modules/node_groups/README.md
new file mode 100644
index 0000000000..76278d9dc4
--- /dev/null
+++ b/modules/node_groups/README.md
@@ -0,0 +1,55 @@
+# eks `node_groups` submodule
+
+Helper submodule to create and manage resources related to `eks_node_groups`.
+
+## Assumptions
+* Designed for use by the parent module and not directly by end users
+
+## Node Groups' IAM Role
+The role ARN specified in `var.default_iam_role_arn` will be used by default. In a simple configuration this will be the worker role created by the parent module.
+
+`iam_role_arn` must be specified in either `var.node_groups_defaults` or `var.node_groups` if the default parent IAM role is not being created for whatever reason, for example if `manage_worker_iam_resources` is set to false in the parent.
+
+## `node_groups` and `node_groups_defaults` keys
+`node_groups_defaults` is a map that can take the below keys. Values will be used if not specified in individual node groups.
+
+`node_groups` is a map of maps. The key of the first level will be used as the unique value for `for_each` resources and in the `aws_eks_node_group` name. The inner map can take the below values.
+
+| Name | Description | Type | If unset |
+|------|-------------|:----:|:-----:|
+| additional\_tags | Additional tags to apply to node group | map(string) | Only `var.tags` applied |
+| ami\_release\_version | AMI version of workers | string | Provider default behavior |
+| ami\_type | AMI Type. See Terraform or AWS docs | string | Provider default behavior |
+| desired\_capacity | Desired number of workers | number | `var.workers_group_defaults[asg_desired_capacity]` |
+| disk\_size | Workers' disk size | number | Provider default behavior |
+| iam\_role\_arn | IAM role ARN for workers | string | `var.default_iam_role_arn` |
+| instance\_type | Workers' instance type | string | `var.workers_group_defaults[instance_type]` |
+| k8s\_labels | Kubernetes labels | map(string) | No labels applied |
+| key\_name | Key name for workers. Set to empty string to disable remote access | string | `var.workers_group_defaults[key_name]` |
+| max\_capacity | Max number of workers | number | `var.workers_group_defaults[asg_max_size]` |
+| min\_capacity | Min number of workers | number | `var.workers_group_defaults[asg_min_size]` |
+| source\_security\_group\_ids | Source security groups for remote access to workers | list(string) | If key\_name is specified: THE REMOTE ACCESS WILL BE OPENED TO THE WORLD |
+| subnets | Subnets to contain workers | list(string) | `var.workers_group_defaults[subnets]` |
+| version | Kubernetes version | string | Provider default behavior |
+
+## Inputs
+
+| Name | Description | Type | Default | Required |
+|------|-------------|:----:|:-----:|:-----:|
+| cluster\_name | Name of parent cluster | string | n/a | yes |
+| create\_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no |
+| default\_iam\_role\_arn | ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults` | string | n/a | yes |
+| node\_groups | Map of maps of `eks_node_groups` to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | `{}` | no |
+| node\_groups\_defaults | Map of values to be applied to all node groups. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | n/a | yes |
+| tags | A map of tags to add to all resources | map(string) | n/a | yes |
+| workers\_group\_defaults | Workers group defaults from parent | any | n/a | yes |
+
+## Outputs
+
+| Name | Description |
+|------|-------------|
+| aws\_auth\_roles | Roles for use in aws-auth ConfigMap |
+| node\_groups | Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values |

diff --git a/modules/node_groups/locals.tf b/modules/node_groups/locals.tf
new file mode 100644
index 0000000000..43cf672ca0
--- /dev/null
+++ b/modules/node_groups/locals.tf
@@ -0,0 +1,16 @@
+locals {
+  # Merge defaults and per-group values to make code cleaner
+  node_groups_expanded = { for k, v in var.node_groups : k => merge(
+    {
+      desired_capacity = var.workers_group_defaults["asg_desired_capacity"]
+      iam_role_arn     = var.default_iam_role_arn
+      instance_type    = var.workers_group_defaults["instance_type"]
+      key_name         = var.workers_group_defaults["key_name"]
+      max_capacity     = var.workers_group_defaults["asg_max_size"]
+      min_capacity     = var.workers_group_defaults["asg_min_size"]
+      subnets          = var.workers_group_defaults["subnets"]
+    },
+    var.node_groups_defaults,
+    v,
+  ) if var.create_eks }
+}

diff --git a/modules/node_groups/node_groups.tf b/modules/node_groups/node_groups.tf
new file mode 100644
index 0000000000..cdbc6d00b3
--- /dev/null
+++ b/modules/node_groups/node_groups.tf
@@ -0,0 +1,49 @@
+resource "aws_eks_node_group" "workers" {
+  for_each = local.node_groups_expanded
+
+  node_group_name = join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])
+
+  cluster_name  = var.cluster_name
+  node_role_arn = each.value["iam_role_arn"]
+  subnet_ids    = each.value["subnets"]
+
+  scaling_config {
+    desired_size = each.value["desired_capacity"]
+    max_size     = each.value["max_capacity"]
+    min_size     = each.value["min_capacity"]
+  }
+
+  ami_type        = lookup(each.value, "ami_type", null)
+  disk_size       = lookup(each.value, "disk_size", null)
+  instance_types  = [each.value["instance_type"]]
+  release_version = lookup(each.value, "ami_release_version", null)
+
+  dynamic "remote_access" {
+    for_each = each.value["key_name"] != "" ?
[{
+      ec2_ssh_key               = each.value["key_name"]
+      source_security_group_ids = lookup(each.value, "source_security_group_ids", [])
+    }] : []
+
+    content {
+      ec2_ssh_key               = remote_access.value["ec2_ssh_key"]
+      source_security_group_ids = remote_access.value["source_security_group_ids"]
+    }
+  }
+
+  version = lookup(each.value, "version", null)
+
+  labels = merge(
+    lookup(var.node_groups_defaults, "k8s_labels", {}),
+    lookup(var.node_groups[each.key], "k8s_labels", {})
+  )
+
+  tags = merge(
+    var.tags,
+    lookup(var.node_groups_defaults, "additional_tags", {}),
+    lookup(var.node_groups[each.key], "additional_tags", {}),
+  )
+
+  lifecycle {
+    create_before_destroy = true
+  }
+}

diff --git a/modules/node_groups/outputs.tf b/modules/node_groups/outputs.tf
new file mode 100644
index 0000000000..ad148ea514
--- /dev/null
+++ b/modules/node_groups/outputs.tf
@@ -0,0 +1,14 @@
+output "node_groups" {
+  description = "Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values"
+  value       = aws_eks_node_group.workers
+}
+
+output "aws_auth_roles" {
+  description = "Roles for use in aws-auth ConfigMap"
+  value = [
+    for k, v in local.node_groups_expanded : {
+      worker_role_arn = lookup(v, "iam_role_arn", var.default_iam_role_arn)
+      platform        = "linux"
+    }
+  ]
+}

diff --git a/modules/node_groups/random.tf b/modules/node_groups/random.tf
new file mode 100644
index 0000000000..14e7ba2bce
--- /dev/null
+++ b/modules/node_groups/random.tf
@@ -0,0 +1,21 @@
+resource "random_pet" "node_groups" {
+  for_each = local.node_groups_expanded
+
+  separator = "-"
+  length    = 2
+
+  keepers = {
+    ami_type      = lookup(each.value, "ami_type", null)
+    disk_size     = lookup(each.value, "disk_size", null)
+    instance_type = each.value["instance_type"]
+    iam_role_arn  = each.value["iam_role_arn"]
+
+    key_name = each.value["key_name"]
+
+    source_security_group_ids = join("|", compact(
+      lookup(each.value, "source_security_group_ids", [])
+    ))
+
+    subnet_ids      = join("|", each.value["subnets"])
+    node_group_name = join("-", [var.cluster_name, each.key])
+  }
+}

diff --git a/modules/node_groups/variables.tf b/modules/node_groups/variables.tf
new file mode 100644
index 0000000000..c0eaa23d1e
--- /dev/null
+++ b/modules/node_groups/variables.tf
@@ -0,0 +1,36 @@
+variable "create_eks" {
+  description = "Controls if EKS resources should be created (it affects almost all resources)"
+  type        = bool
+  default     = true
+}
+
+variable "cluster_name" {
+  description = "Name of parent cluster"
+  type        = string
+}
+
+variable "default_iam_role_arn" {
+  description = "ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults`"
+  type        = string
+}
+
+variable "workers_group_defaults" {
+  description = "Workers group defaults from parent"
+  type        = any
+}
+
+variable "tags" {
+  description = "A map of tags to add to all resources"
+  type        = map(string)
+}
+
+variable "node_groups_defaults" {
+  description = "Map of values to be applied to all node groups. See \"`node_groups` and `node_groups_defaults` keys\" section in README.md for more details"
+  type        = any
+}
+
+variable "node_groups" {
+  description = "Map of maps of `eks_node_groups` to create. See \"`node_groups` and `node_groups_defaults` keys\" section in README.md for more details"
+  type        = any
+  default     = {}
+}

diff --git a/node_groups.tf b/node_groups.tf
index eb2f4c310b..6c7b438cfb 100644
--- a/node_groups.tf
+++ b/node_groups.tf
@@ -1,112 +1,29 @@
-resource "aws_iam_role" "node_groups" {
-  count                 = var.create_eks && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  name                  = "${var.workers_role_name != "" ? var.workers_role_name : aws_eks_cluster.this[0].name}-managed-node-groups"
-  assume_role_policy    = data.aws_iam_policy_document.workers_assume_role_policy.json
-  permissions_boundary  = var.permissions_boundary
-  path                  = var.iam_path
-  force_detach_policies = true
-  tags                  = var.tags
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_AmazonEKSWorkerNodePolicy" {
-  count      = var.create_eks && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
-  role       = aws_iam_role.node_groups[0].name
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_AmazonEKS_CNI_Policy" {
-  count      = var.create_eks && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
-  role       = aws_iam_role.node_groups[0].name
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_AmazonEC2ContainerRegistryReadOnly" {
-  count      = var.create_eks && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
-  role       = aws_iam_role.node_groups[0].name
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_additional_policies" {
-  for_each = var.create_eks && local.worker_group_managed_node_group_count > 0 ? toset(var.workers_additional_policies) : []
-
-  role       = aws_iam_role.node_groups[0].name
-  policy_arn = each.key
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_autoscaling" {
-  count      = var.create_eks && var.manage_worker_autoscaling_policy && var.attach_worker_autoscaling_policy && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  policy_arn = aws_iam_policy.node_groups_autoscaling[0].arn
-  role       = aws_iam_role.node_groups[0].name
-}
-
-resource "aws_iam_policy" "node_groups_autoscaling" {
-  count       = var.create_eks && var.manage_worker_autoscaling_policy && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  name_prefix = "eks-worker-autoscaling-${aws_eks_cluster.this[0].name}"
-  description = "EKS worker node autoscaling policy for cluster ${aws_eks_cluster.this[0].name}"
-  policy      = data.aws_iam_policy_document.worker_autoscaling[0].json
-  path        = var.iam_path
-}
-
-resource "random_pet" "node_groups" {
-  for_each = var.create_eks ? local.node_groups : {}
-
-  separator = "-"
-  length    = 2
-
-  keepers = {
-    instance_type = lookup(each.value, "instance_type", local.workers_group_defaults["instance_type"])
-
-    ec2_ssh_key = lookup(each.value, "key_name", local.workers_group_defaults["key_name"])
-
-    source_security_group_ids = join("-", compact(
-      lookup(each.value, "source_security_group_ids", local.workers_group_defaults["source_security_group_id"]
-    )))
-
-    node_group_name = join("-", [var.cluster_name, each.value["name"]])
+# Hack to ensure ordering of resource creation. Do not create node_groups
+# before other resources are ready. Removes race conditions
+data "null_data_source" "node_groups" {
+  count = var.create_eks ? 1 : 0
+
+  inputs = {
+    cluster_name = var.cluster_name
+
+    # Ensure these resources are created before "unlocking" the data source.
+    # `depends_on` causes a refresh on every run so is useless here.
+    # [Re]creating or removing these resources will trigger recreation of Node Group resources
+    aws_auth         = coalescelist(kubernetes_config_map.aws_auth[*].id, [""])[0]
+    role_NodePolicy  = coalescelist(aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[*].id, [""])[0]
+    role_CNI_Policy  = coalescelist(aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[*].id, [""])[0]
+    role_Container   = coalescelist(aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[*].id, [""])[0]
+    role_autoscaling = coalescelist(aws_iam_role_policy_attachment.workers_autoscaling[*].id, [""])[0]
   }
 }
 
-resource "aws_eks_node_group" "workers" {
-  for_each = var.create_eks ? local.node_groups : {}
-
-  node_group_name = join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])
-
-  cluster_name  = var.cluster_name
-  node_role_arn = lookup(each.value, "iam_role_arn", aws_iam_role.node_groups[0].arn)
-  subnet_ids    = lookup(each.value, "subnets", local.workers_group_defaults["subnets"])
-
-  scaling_config {
-    desired_size = lookup(each.value, "node_group_desired_capacity", local.workers_group_defaults["asg_desired_capacity"])
-    max_size     = lookup(each.value, "node_group_max_capacity", local.workers_group_defaults["asg_max_size"])
-    min_size     = lookup(each.value, "node_group_min_capacity", local.workers_group_defaults["asg_min_size"])
-  }
-
-  ami_type        = lookup(each.value, "ami_type", null)
-  disk_size       = lookup(each.value, "root_volume_size", null)
-  instance_types  = [lookup(each.value, "instance_type", null)]
-  labels          = lookup(each.value, "node_group_k8s_labels", null)
-  release_version = lookup(each.value, "ami_release_version", null)
-
-  dynamic "remote_access" {
-    for_each = [
-      for node_group in [each.value] : {
-        ec2_ssh_key               = node_group["key_name"]
-        source_security_group_ids = lookup(node_group, "source_security_group_ids", [])
-      }
-      if lookup(node_group, "key_name", "") != ""
-    ]
-
-    content {
-      ec2_ssh_key               = remote_access.value["ec2_ssh_key"]
-      source_security_group_ids = remote_access.value["source_security_group_ids"]
-    }
-  }
-
-  version = aws_eks_cluster.this[0].version
-
-  tags = lookup(each.value, "node_group_additional_tags", null)
-
-  lifecycle {
-    create_before_destroy = true
-  }
+module "node_groups" {
+  source                 = "./modules/node_groups"
+  create_eks             = var.create_eks
+  cluster_name           = coalescelist(data.null_data_source.node_groups[*].outputs["cluster_name"], [""])[0]
+  default_iam_role_arn   = coalescelist(aws_iam_role.workers[*].arn, [""])[0]
+  workers_group_defaults = local.workers_group_defaults
+  tags                   = var.tags
+  node_groups_defaults   = var.node_groups_defaults
+  node_groups            = var.node_groups
 }

diff --git
a/outputs.tf b/outputs.tf
index 34e0064779..e72b29457e 100644
--- a/outputs.tf
+++ b/outputs.tf
@@ -163,10 +163,7 @@ output "worker_autoscaling_policy_arn" {
   value = concat(aws_iam_policy.worker_autoscaling[*].arn, [""])[0]
 }
 
-output "node_groups_iam_role_arns" {
-  description = "IAM role ARNs for EKS node groups"
-  value = {
-    for node_group in aws_eks_node_group.workers :
-    node_group.node_group_name => node_group.node_role_arn
-  }
+output "node_groups" {
+  description = "Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys"
+  value       = module.node_groups.node_groups
 }

diff --git a/variables.tf b/variables.tf
index f04b493f03..2b64a9a28a 100644
--- a/variables.tf
+++ b/variables.tf
@@ -282,10 +282,16 @@ variable "create_eks" {
   default = true
 }
 
+variable "node_groups_defaults" {
+  description = "Map of values to be applied to all node groups. See `node_groups` module's documentation for more details"
+  type        = any
+  default     = {}
+}
+
 variable "node_groups" {
-  description = "A list of maps defining node group configurations to be defined using AWS EKS Managed Node Groups. See workers_group_defaults for valid keys."
+  description = "Map of maps of node groups to create. See `node_groups` module's documentation for more details"
   type        = any
-  default     = []
+  default     = {}
 }
 
 variable "enable_irsa" {

From a9db852d44395391c2e110ceeb5aadbf009be39c Mon Sep 17 00:00:00 2001
From: Max Williams
Date: Thu, 9 Jan 2020 14:10:47 +0100
Subject: [PATCH 2/4] Release 8.0.0 (#662)

* Release 8.0.0
* Update changelog
* remove 'defaults' node group
* Make curl silent
---
 CHANGELOG.md                         | 12 +++++++++---
 cluster.tf                           |  2 +-
 examples/managed_node_groups/main.tf |  1 -
 version                              |  2 +-
 4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index f1e6b00180..34c1538c35 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,13 +7,20 @@ project adheres to [Semantic Versioning](http://semver.org/).
 ## Next release
 
-## [[v8.?.?](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v7.0.0...HEAD)] - 2019-??-??
+## [[v8.?.?](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v8.0.0...HEAD)] - 2019-12-11
+- Write your awesome change here (by @you)
+
+# History
+
+## [[v8.0.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v7.0.1...v8.0.0)] - 2019-12-11
+
+- **Breaking:** Change logic of security group whitelisting. Will always whitelist worker security group on control plane security group, either provide one or create a new one. See Important notes below for upgrade notes (by @ryanooi)
+- **Breaking:** Configure the aws-auth configmap using the terraform kubernetes providers. See Important notes below for upgrade notes (by @sdehaes)
 - Wait for cluster to respond to kubectl before applying auth map_config (@shaunc)
 - Added flag `create_eks` to conditionally create resources (by @syst0m / @tbeijen)
 - Support for AWS EKS Managed Node Groups. (by @wmorgan6796)
 - Added a if check on `aws-auth` configmap when `map_roles` is empty (by @shanmugakarna)
-- **Breaking:** Configure the aws-auth configmap using the terraform kubernetes providers. See Important notes below for upgrade notes (by @sdehaes)
 - Removed no longer used variable `write_aws_auth_config` (by @tbeijen)
 - Exit with error code when `aws-auth` configmap is unable to be updated (by @knittingdev)
 - Fix deprecated interpolation-only expression (by @angelabad)
@@ -25,7 +32,6 @@ project adheres to [Semantic Versioning](http://semver.org/).
 - Added support to create IAM OpenID Connect Identity Provider to enable EKS Identity Roles for Service Accounts (IRSA). (by @alaa)
 - Adding node group iam role arns to outputs. (by @mukgupta)
 - Added the OIDC Provider ARN to outputs. (by @eytanhanig)
-- **Breaking:** Change logic of security group whitelisting. Will always whitelist worker security group on control plane security group either provide one or create new one. See Important notes below for upgrade notes (by @ryanooi)
 - Move `eks_node_group` resources to a submodule (by @dpiddockcmp)
 - Add complex output `node_groups` (by @TBeijen)

diff --git a/cluster.tf b/cluster.tf
index 86ff69a29a..764c737ff0 100644
--- a/cluster.tf
+++ b/cluster.tf
@@ -33,7 +33,7 @@ resource "aws_eks_cluster" "this" {
   ]
 
   provisioner "local-exec" {
     command = </dev/null; do sleep 4; done
+until curl -k -s ${aws_eks_cluster.this[0].endpoint}/healthz >/dev/null; do sleep 4; done
 EOT
   }
 }

diff --git a/examples/managed_node_groups/main.tf b/examples/managed_node_groups/main.tf
index db55e7d220..c31abb36b5 100644
--- a/examples/managed_node_groups/main.tf
+++ b/examples/managed_node_groups/main.tf
@@ -113,7 +113,6 @@ module "eks" {
         ExtraTag = "example"
       }
     }
-    defaults = {}
   }
 
   map_roles = var.map_roles

diff --git a/version b/version
index f8ba35d676..5f4f91fb4f 100644
--- a/version
+++ b/version
@@ -1 +1 @@
-v7.0.1
+v8.0.0

From 82aefb20f5dc1e4d195a3d19d6c091e1ed405932 Mon Sep 17 00:00:00 2001
From: Siddarth Prakash <1428486+sidprak@users.noreply.github.com>
Date: Thu, 9 Jan 2020 18:53:33 -0500
Subject: [PATCH 3/4] Add public access endpoint CIDRs option
 (terraform-aws-eks#647) (#673)

* Add public access endpoint CIDRs option (terraform-aws-eks#647)
* Update required provider version to 2.44.0
* Fix formatting in docs
---
 CHANGELOG.md | 1 +
 README.md    | 1 +
 cluster.tf   | 1 +
 variables.tf | 6 ++++++
 versions.tf  | 2 +-
 5 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 34c1538c35..63cf796f17 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,7 @@ project adheres to [Semantic Versioning](http://semver.org/).
 ## [[v8.?.?](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v8.0.0...HEAD)] - 2019-12-11
 - Write your awesome change here (by @you)
+- Add support for restricting access to the public API endpoint (@sidprak)
 
 # History
 
diff --git a/README.md b/README.md
index 05dbed13c9..b22f5739e5 100644
--- a/README.md
+++ b/README.md
@@ -157,6 +157,7 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | cluster\_enabled\_log\_types | A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | list(string) | `[]` | no |
 | cluster\_endpoint\_private\_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. | bool | `"false"` | no |
 | cluster\_endpoint\_public\_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. | bool | `"true"` | no |
+| cluster\_endpoint\_public\_access\_cidrs | List of CIDR blocks which can access the Amazon EKS public API server endpoint. | list(string) | `[ "0.0.0.0/0" ]` | no |
 | cluster\_iam\_role\_name | IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false. | string | `""` | no |
 | cluster\_log\_kms\_key\_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | string | `""` | no |
 | cluster\_log\_retention\_in\_days | Number of days to retain log events. Default retention - 90 days. | number | `"90"` | no |

diff --git a/cluster.tf b/cluster.tf
index 764c737ff0..877ddda5c7 100644
--- a/cluster.tf
+++ b/cluster.tf
@@ -19,6 +19,7 @@ resource "aws_eks_cluster" "this" {
     subnet_ids              = var.subnets
     endpoint_private_access = var.cluster_endpoint_private_access
     endpoint_public_access  = var.cluster_endpoint_public_access
+    public_access_cidrs     = var.cluster_endpoint_public_access_cidrs
   }
 
   timeouts {

diff --git a/variables.tf b/variables.tf
index 2b64a9a28a..92b906eadc 100644
--- a/variables.tf
+++ b/variables.tf
@@ -234,6 +234,12 @@ variable "cluster_endpoint_public_access" {
   default = true
 }
 
+variable "cluster_endpoint_public_access_cidrs" {
+  description = "List of CIDR blocks which can access the Amazon EKS public API server endpoint."
+  type        = list(string)
+  default     = ["0.0.0.0/0"]
+}
+
 variable "manage_cluster_iam_resources" {
   description = "Whether to let the module manage cluster IAM resources. If set to false, cluster_iam_role_name must be specified."
   type        = bool

diff --git a/versions.tf b/versions.tf
index e95ea3e9d3..95fb1ef19e 100644
--- a/versions.tf
+++ b/versions.tf
@@ -2,7 +2,7 @@ terraform {
   required_version = ">= 0.12.9"
 
   required_providers {
-    aws      = ">= 2.38.0"
+    aws      = ">= 2.44.0"
     local    = ">= 1.2"
     null     = ">= 2.1"
     template = ">= 2.1"

From c5f50d59692c14a2133c4d4a2f961d87e9c78e35 Mon Sep 17 00:00:00 2001
From: "Thierno IB.
BARRY" Date: Mon, 13 Jan 2020 14:39:59 +0100 Subject: [PATCH 4/4] Re-generate docs with terraform-docs 0.7.0 and bump pre-commit-terraform version (#668) * re-generate docs with terraform-docs 0.7.0 * bump pre-commit-terraform version --- .pre-commit-config.yaml | 4 +- README.md | 156 +++++++++++++++++----------------- modules/node_groups/README.md | 16 ++-- 3 files changed, 89 insertions(+), 87 deletions(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index f21c517082..e73233f2ba 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -1,7 +1,9 @@ +repos: - repo: git://github.com/antonbabenko/pre-commit-terraform - rev: v1.19.0 + rev: v1.22.0 hooks: - id: terraform_fmt - id: terraform_docs + args: [--args=--with-aggregate-type-defaults --no-escape] - id: terraform_validate - id: terraform_tflint diff --git a/README.md b/README.md index b22f5739e5..f1b2c79c1b 100644 --- a/README.md +++ b/README.md @@ -150,91 +150,91 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a | Name | Description | Type | Default | Required | |------|-------------|:----:|:-----:|:-----:| -| attach\_worker\_autoscaling\_policy | Whether to attach the module managed cluster autoscaling iam policy to the default worker IAM role. This requires `manage_worker_autoscaling_policy = true` | bool | `"true"` | no | -| attach\_worker\_cni\_policy | Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default worker IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or nodes will not be able to join the cluster. | bool | `"true"` | no | -| cluster\_create\_timeout | Timeout value when creating the EKS cluster. | string | `"15m"` | no | -| cluster\_delete\_timeout | Timeout value when deleting the EKS cluster. | string | `"15m"` | no | -| cluster\_enabled\_log\_types | A list of the desired control plane logging to enable. 
For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | list(string) | `[]` | no | -| cluster\_endpoint\_private\_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. | bool | `"false"` | no | -| cluster\_endpoint\_public\_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. | bool | `"true"` | no | -| cluster\_endpoint\_public\_access\_cidrs | List of CIDR blocks which can access the Amazon EKS public API server endpoint. | list(string) | `[ "0.0.0.0/0" ]` | no | -| cluster\_iam\_role\_name | IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false. | string | `""` | no | -| cluster\_log\_kms\_key\_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | string | `""` | no | -| cluster\_log\_retention\_in\_days | Number of days to retain log events. Default retention - 90 days. | number | `"90"` | no | -| cluster\_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | string | n/a | yes | -| cluster\_security\_group\_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers | string | `""` | no | -| cluster\_version | Kubernetes version to use for the EKS cluster. | string | `"1.14"` | no | -| config\_output\_path | Where to save the Kubectl config file (if `write_kubeconfig = true`). Assumed to be a directory if the value ends with a forward slash `/`. 
| string | `"./"` | no | -| create\_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no | -| eks\_oidc\_root\_ca\_thumbprint | Thumbprint of Root CA for EKS OIDC, Valid until 2037 | string | `"9e99a48a9960b14926bb7f3b02e22da2b0ab7280"` | no | -| enable\_irsa | Whether to create OpenID Connect Provider for EKS to enable IRSA | bool | `"false"` | no | -| iam\_path | If provided, all IAM roles will be created on this path. | string | `"/"` | no | -| kubeconfig\_aws\_authenticator\_additional\_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | list(string) | `[]` | no | -| kubeconfig\_aws\_authenticator\_command | Command to use to fetch AWS EKS credentials. | string | `"aws-iam-authenticator"` | no | -| kubeconfig\_aws\_authenticator\_command\_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name]. | list(string) | `[]` | no | -| kubeconfig\_aws\_authenticator\_env\_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | map(string) | `{}` | no | -| kubeconfig\_name | Override the default name used for items kubeconfig. | string | `""` | no | -| local\_exec\_interpreter | Command to run for local-exec resources. Must be a shell-style interpreter. If you are on Windows Git Bash is a good choice. | list(string) | `[ "/bin/sh", "-c" ]` | no | -| manage\_aws\_auth | Whether to apply the aws-auth configmap file. | string | `"true"` | no | -| manage\_cluster\_iam\_resources | Whether to let the module manage cluster IAM resources. If set to false, cluster_iam_role_name must be specified. | bool | `"true"` | no | -| manage\_worker\_autoscaling\_policy | Whether to let the module manage the cluster autoscaling iam policy. | bool | `"true"` | no | -| manage\_worker\_iam\_resources | Whether to let the module manage worker IAM resources. 
If set to false, iam_instance_profile_name must be specified for workers. | bool | `"true"` | no | -| map\_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list(string) | `[]` | no | -| map\_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no | -| map\_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no | -| node\_groups | Map of map of node groups to create. See `node_groups` module's documentation for more details | any | `{}` | no | -| node\_groups\_defaults | Map of values to be applied to all node groups. See `node_groups` module's documentaton for more details | any | `{}` | no | -| permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | string | `"null"` | no | +| attach_worker_autoscaling_policy | Whether to attach the module managed cluster autoscaling iam policy to the default worker IAM role. This requires `manage_worker_autoscaling_policy = true` | bool | `"true"` | no | +| attach_worker_cni_policy | Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default worker IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or nodes will not be able to join the cluster. | bool | `"true"` | no | +| cluster_create_timeout | Timeout value when creating the EKS cluster. | string | `"15m"` | no | +| cluster_delete_timeout | Timeout value when deleting the EKS cluster. | string | `"15m"` | no | +| cluster_enabled_log_types | A list of the desired control plane logging to enable. 
For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | list(string) | `[]` | no | +| cluster_endpoint_private_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. | bool | `"false"` | no | +| cluster_endpoint_public_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. | bool | `"true"` | no | +| cluster_endpoint_public_access_cidrs | List of CIDR blocks which can access the Amazon EKS public API server endpoint. | list(string) | `[ "0.0.0.0/0" ]` | no | +| cluster_iam_role_name | IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false. | string | `""` | no | +| cluster_log_kms_key_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | string | `""` | no | +| cluster_log_retention_in_days | Number of days to retain log events. Default retention - 90 days. | number | `"90"` | no | +| cluster_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | string | n/a | yes | +| cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers | string | `""` | no | +| cluster_version | Kubernetes version to use for the EKS cluster. | string | `"1.14"` | no | +| config_output_path | Where to save the Kubectl config file (if `write_kubeconfig = true`). Assumed to be a directory if the value ends with a forward slash `/`. 
| string | `"./"` | no | +| create_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no | +| eks_oidc_root_ca_thumbprint | Thumbprint of Root CA for EKS OIDC, Valid until 2037 | string | `"9e99a48a9960b14926bb7f3b02e22da2b0ab7280"` | no | +| enable_irsa | Whether to create OpenID Connect Provider for EKS to enable IRSA | bool | `"false"` | no | +| iam_path | If provided, all IAM roles will be created on this path. | string | `"/"` | no | +| kubeconfig_aws_authenticator_additional_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | list(string) | `[]` | no | +| kubeconfig_aws_authenticator_command | Command to use to fetch AWS EKS credentials. | string | `"aws-iam-authenticator"` | no | +| kubeconfig_aws_authenticator_command_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name]. | list(string) | `[]` | no | +| kubeconfig_aws_authenticator_env_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | map(string) | `{}` | no | +| kubeconfig_name | Override the default name used for items kubeconfig. | string | `""` | no | +| local_exec_interpreter | Command to run for local-exec resources. Must be a shell-style interpreter. If you are on Windows Git Bash is a good choice. | list(string) | `[ "/bin/sh", "-c" ]` | no | +| manage_aws_auth | Whether to apply the aws-auth configmap file. | string | `"true"` | no | +| manage_cluster_iam_resources | Whether to let the module manage cluster IAM resources. If set to false, cluster_iam_role_name must be specified. | bool | `"true"` | no | +| manage_worker_autoscaling_policy | Whether to let the module manage the cluster autoscaling iam policy. | bool | `"true"` | no | +| manage_worker_iam_resources | Whether to let the module manage worker IAM resources. 
If set to false, iam_instance_profile_name must be specified for workers. | bool | `"true"` | no | +| map_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list(string) | `[]` | no | +| map_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no | +| map_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no | +| node_groups | Map of maps of node groups to create. See `node_groups` module's documentation for more details | any | `{}` | no | +| node_groups_defaults | Map of values to be applied to all node groups. See `node_groups` module's documentation for more details | any | `{}` | no | +| permissions_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | string | `"null"` | no | | subnets | A list of subnets to place the EKS cluster and workers within. | list(string) | n/a | yes | | tags | A map of tags to add to all resources. | map(string) | `{}` | no | -| vpc\_id | VPC where the cluster and workers will be deployed. | string | n/a | yes | -| worker\_additional\_security\_group\_ids | A list of additional security group ids to attach to worker instances | list(string) | `[]` | no | -| worker\_ami\_name\_filter | Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used. | string | `""` | no | -| worker\_ami\_name\_filter\_windows | Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used. | string | `""` | no | -| worker\_ami\_owner\_id | The ID of the owner for the AMI to use for the AWS EKS workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft').
| string | `"602401143452"` | no | -| worker\_ami\_owner\_id\_windows | The ID of the owner for the AMI to use for the AWS EKS Windows workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft'). | string | `"801119661308"` | no | -| worker\_create\_initial\_lifecycle\_hooks | Whether to create initial lifecycle hooks provided in worker groups. | bool | `"false"` | no | -| worker\_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers_group_defaults for valid keys. | any | `[]` | no | -| worker\_groups\_launch\_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | any | `[]` | no | -| worker\_security\_group\_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | string | `""` | no | -| worker\_sg\_ingress\_from\_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | number | `"1025"` | no | -| workers\_additional\_policies | Additional policies to be added to workers | list(string) | `[]` | no | -| workers\_group\_defaults | Override default values for target groups. See workers_group_defaults_defaults in local.tf for valid keys. | any | `{}` | no | -| workers\_role\_name | User defined workers role name. | string | `""` | no | -| write\_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`. | bool | `"true"` | no | +| vpc_id | VPC where the cluster and workers will be deployed. 
| string | n/a | yes | +| worker_additional_security_group_ids | A list of additional security group ids to attach to worker instances | list(string) | `[]` | no | +| worker_ami_name_filter | Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used. | string | `""` | no | +| worker_ami_name_filter_windows | Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used. | string | `""` | no | +| worker_ami_owner_id | The ID of the owner for the AMI to use for the AWS EKS workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft'). | string | `"602401143452"` | no | +| worker_ami_owner_id_windows | The ID of the owner for the AMI to use for the AWS EKS Windows workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft'). | string | `"801119661308"` | no | +| worker_create_initial_lifecycle_hooks | Whether to create initial lifecycle hooks provided in worker groups. | bool | `"false"` | no | +| worker_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers_group_defaults for valid keys. | any | `[]` | no | +| worker_groups_launch_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | any | `[]` | no | +| worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | string | `""` | no | +| worker_sg_ingress_from_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 
22, 80, or 443). | number | `"1025"` | no | +| workers_additional_policies | Additional policies to be added to workers | list(string) | `[]` | no | +| workers_group_defaults | Override default values for target groups. See workers_group_defaults_defaults in local.tf for valid keys. | any | `{}` | no | +| workers_role_name | User defined workers role name. | string | `""` | no | +| write_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`. | bool | `"true"` | no | ## Outputs | Name | Description | |------|-------------| -| cloudwatch\_log\_group\_name | Name of cloudwatch log group created | -| cluster\_arn | The Amazon Resource Name (ARN) of the cluster. | -| cluster\_certificate\_authority\_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. | -| cluster\_endpoint | The endpoint for your EKS Kubernetes API. | -| cluster\_iam\_role\_arn | IAM role ARN of the EKS cluster. | -| cluster\_iam\_role\_name | IAM role name of the EKS cluster. | -| cluster\_id | The name/id of the EKS cluster. | -| cluster\_oidc\_issuer\_url | The URL on the EKS cluster OIDC Issuer | -| cluster\_security\_group\_id | Security group ID attached to the EKS cluster. | -| cluster\_version | The Kubernetes server version for the EKS cluster. | -| config\_map\_aws\_auth | A kubernetes configuration to authenticate to this EKS cluster. | +| cloudwatch_log_group_name | Name of cloudwatch log group created | +| cluster_arn | The Amazon Resource Name (ARN) of the cluster. | +| cluster_certificate_authority_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. | +| cluster_endpoint | The endpoint for your EKS Kubernetes API. | +| cluster_iam_role_arn | IAM role ARN of the EKS cluster. 
| +| cluster_iam_role_name | IAM role name of the EKS cluster. | +| cluster_id | The name/id of the EKS cluster. | +| cluster_oidc_issuer_url | The URL on the EKS cluster OIDC Issuer | +| cluster_security_group_id | Security group ID attached to the EKS cluster. | +| cluster_version | The Kubernetes server version for the EKS cluster. | +| config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. | | kubeconfig | kubectl config file contents for this EKS cluster. | -| kubeconfig\_filename | The filename of the generated kubectl config. | -| node\_groups | Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys | -| oidc\_provider\_arn | The ARN of the OIDC Provider if `enable_irsa = true`. | -| worker\_autoscaling\_policy\_arn | ARN of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` | -| worker\_autoscaling\_policy\_name | Name of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` | -| worker\_iam\_instance\_profile\_arns | default IAM instance profile ARN for EKS worker groups | -| worker\_iam\_instance\_profile\_names | default IAM instance profile name for EKS worker groups | -| worker\_iam\_role\_arn | default IAM role ARN for EKS worker groups | -| worker\_iam\_role\_name | default IAM role name for EKS worker groups | -| worker\_security\_group\_id | Security group ID attached to the EKS workers. | -| workers\_asg\_arns | IDs of the autoscaling groups containing workers. | -| workers\_asg\_names | Names of the autoscaling groups containing workers. | -| workers\_default\_ami\_id | ID of the default worker group AMI | -| workers\_launch\_template\_arns | ARNs of the worker launch templates. | -| workers\_launch\_template\_ids | IDs of the worker launch templates. | -| workers\_launch\_template\_latest\_versions | Latest versions of the worker launch templates. 
| -| workers\_user\_data | User data of worker groups | +| kubeconfig_filename | The filename of the generated kubectl config. | +| node_groups | Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys | +| oidc_provider_arn | The ARN of the OIDC Provider if `enable_irsa = true`. | +| worker_autoscaling_policy_arn | ARN of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` | +| worker_autoscaling_policy_name | Name of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` | +| worker_iam_instance_profile_arns | default IAM instance profile ARN for EKS worker groups | +| worker_iam_instance_profile_names | default IAM instance profile name for EKS worker groups | +| worker_iam_role_arn | default IAM role ARN for EKS worker groups | +| worker_iam_role_name | default IAM role name for EKS worker groups | +| worker_security_group_id | Security group ID attached to the EKS workers. | +| workers_asg_arns | IDs of the autoscaling groups containing workers. | +| workers_asg_names | Names of the autoscaling groups containing workers. | +| workers_default_ami_id | ID of the default worker group AMI | +| workers_launch_template_arns | ARNs of the worker launch templates. | +| workers_launch_template_ids | IDs of the worker launch templates. | +| workers_launch_template_latest_versions | Latest versions of the worker launch templates. | +| workers_user_data | User data of worker groups | diff --git a/modules/node_groups/README.md b/modules/node_groups/README.md index 76278d9dc4..62ecba3e71 100644 --- a/modules/node_groups/README.md +++ b/modules/node_groups/README.md @@ -37,19 +37,19 @@ The role ARN specified in `var.default_iam_role_arn` will be used by default. 
In | Name | Description | Type | Default | Required | |------|-------------|:----:|:-----:|:-----:| -| cluster\_name | Name of parent cluster | string | n/a | yes | -| create\_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no | -| default\_iam\_role\_arn | ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults` | string | n/a | yes | -| node\_groups | Map of maps of `eks_node_groups` to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | `{}` | no | -| node\_groups\_defaults | map of maps of node groups to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | n/a | yes | +| cluster_name | Name of parent cluster | string | n/a | yes | +| create_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no | +| default_iam_role_arn | ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults` | string | n/a | yes | +| node_groups | Map of maps of `eks_node_groups` to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | `{}` | no | +| node_groups_defaults | Map of values to be applied to all node groups. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | n/a | yes | | tags | A map of tags to add to all resources | map(string) | n/a | yes | -| workers\_group\_defaults | Workers group defaults from parent | any | n/a | yes | +| workers_group_defaults | Workers group defaults from parent | any | n/a | yes | ## Outputs | Name | Description | |------|-------------| -| aws\_auth\_roles | Roles for use in aws-auth ConfigMap | -| node\_groups | Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys.
See `aws_eks_node_group` Terraform documentation for values | +| aws_auth_roles | Roles for use in aws-auth ConfigMap | +| node_groups | Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values |
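Taken together, the features in this patch series (the `node_groups` submodule from patch 1/4 and `cluster_endpoint_public_access_cidrs` from patch 3/4) might be exercised from a root module roughly as follows. This is a sketch based on the `examples/managed_node_groups` example touched by the series; the cluster name, CIDR block, and the `var.vpc_id`/`var.subnets` references are placeholders, not part of the patches.

```hcl
# Hypothetical root module call; values are illustrative only.
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "example"
  cluster_version = "1.14"
  vpc_id          = var.vpc_id
  subnets         = var.subnets

  # Patch 3/4: restrict the public API endpoint to known CIDR blocks
  # (defaults to ["0.0.0.0/0"] when unset).
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["10.0.0.0/8"]

  # Patch 1/4: managed node groups, now handled by the node_groups
  # submodule. Defaults are deep-merged into every node group.
  node_groups_defaults = {
    ami_type  = "AL2_x86_64"
    disk_size = 50
  }

  node_groups = {
    example = {
      desired_capacity = 1
      max_capacity     = 10
      min_capacity     = 1
      instance_type    = "m5.large"

      k8s_labels = {
        Environment = "test"
      }
      additional_tags = {
        ExtraTag = "example"
      }
    }
  }
}
```

Note that because the submodule now creates or edits the aws-auth ConfigMap for node groups, the root module must still have a working kubernetes provider configuration, as required since the v8.0.0 breaking change.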