- Worker locals/defaults moved to workers submodule
- Create separate defaults for node groups
- Workers IAM management left outside of the module, as both node_group and worker_groups use them
- Add option to migrate to worker group module
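
For illustration, the reworked `worker_groups` input takes a map keyed by group name instead of a list, while the old list-based inputs live on under `*_legacy` names. A minimal sketch (group names, instance types, and counts here are assumptions; valid keys come from `workers_group_defaults` in the workers submodule, as listed in the variable table below):

```hcl
# New style: worker groups keyed by name, backed by Launch Templates
# in the workers submodule.
worker_groups = {
  workers-a = {
    instance_type        = "m5.large"
    asg_desired_capacity = 2
  }
}

# Existing list-based groups can be kept under the renamed legacy inputs
# while migrating.
worker_groups_legacy                 = []
worker_groups_launch_template_legacy = []
```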
@@ -266,7 +267,7 @@ Apache 2 Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraf
| <a name="input_subnets"></a> [subnets](#input\_subnets) | A list of subnets to place the EKS cluster and workers within. | `list(string)` | n/a | yes |
| <a name="input_tags"></a> [tags](#input\_tags) | A map of tags to add to all resources. Tags added to launch configuration or templates override these values for ASG Tags only. | `map(string)` | `{}` | no |
| <a name="input_vpc_id"></a> [vpc\_id](#input\_vpc\_id) | VPC where the cluster and workers will be deployed. | `string` | n/a | yes |
-| <a name="input_wait_for_cluster_timeout"></a> [wait\_for\_cluster\_timeout](#wait\_for\_cluster\_timeout) | Allows for a configurable timeout (in seconds) when waiting for a cluster to come up | `number` | `300` | no |
+| <a name="input_wait_for_cluster_timeout"></a> [wait\_for\_cluster\_timeout](#input\_wait\_for\_cluster\_timeout) | A timeout (in seconds) to wait for cluster to be available. | `number` | `300` | no |
| <a name="input_worker_additional_security_group_ids"></a> [worker\_additional\_security\_group\_ids](#input\_worker\_additional\_security\_group\_ids) | A list of additional security group ids to attach to worker instances | `list(string)` | `[]` | no |
| <a name="input_worker_ami_name_filter"></a> [worker\_ami\_name\_filter](#input\_worker\_ami\_name\_filter) | Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster\_version' is used. | `string` | `""` | no |
| <a name="input_worker_ami_name_filter_windows"></a> [worker\_ami\_name\_filter\_windows](#input\_worker\_ami\_name\_filter\_windows) | Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster\_version' is used. | `string` | `""` | no |
@@ -275,8 +276,9 @@ Apache 2 Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraf
| <a name="input_worker_create_cluster_primary_security_group_rules"></a> [worker\_create\_cluster\_primary\_security\_group\_rules](#input\_worker\_create\_cluster\_primary\_security\_group\_rules) | Whether to create security group rules to allow communication between pods on workers and pods using the primary cluster security group. | `bool` | `false` | no |
| <a name="input_worker_create_initial_lifecycle_hooks"></a> [worker\_create\_initial\_lifecycle\_hooks](#input\_worker\_create\_initial\_lifecycle\_hooks) | Whether to create initial lifecycle hooks provided in worker groups. | `bool` | `false` | no |
| <a name="input_worker_create_security_group"></a> [worker\_create\_security\_group](#input\_worker\_create\_security\_group) | Whether to create a security group for the workers or attach the workers to `worker_security_group_id`. | `bool` | `true` | no |
-| <a name="input_worker_groups"></a> [worker\_groups](#input\_worker\_groups) | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers\_group\_defaults for valid keys. | `any` | `[]` | no |
-| <a name="input_worker_groups_launch_template"></a> [worker\_groups\_launch\_template](#input\_worker\_groups\_launch\_template) | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers\_group\_defaults for valid keys. | `any` | `[]` | no |
+| <a name="input_worker_groups"></a> [worker\_groups](#input\_worker\_groups) | A map of maps defining worker group configurations to be defined using AWS Launch Templates. See workers\_group\_defaults for valid keys. | `any` | `{}` | no |
+| <a name="input_worker_groups_launch_template_legacy"></a> [worker\_groups\_launch\_template\_legacy](#input\_worker\_groups\_launch\_template\_legacy) | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers\_group\_defaults for valid keys. | `any` | `[]` | no |
+| <a name="input_worker_groups_legacy"></a> [worker\_groups\_legacy](#input\_worker\_groups\_legacy) | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers\_group\_defaults for valid keys. | `any` | `[]` | no |
| <a name="input_worker_security_group_id"></a> [worker\_security\_group\_id](#input\_worker\_security\_group\_id) | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | `string` | `""` | no |
| <a name="input_worker_sg_ingress_from_port"></a> [worker\_sg\_ingress\_from\_port](#input\_worker\_sg\_ingress\_from\_port) | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | `number` | `1025` | no |
| <a name="input_workers_additional_policies"></a> [workers\_additional\_policies](#input\_workers\_additional\_policies) | Additional policies to be added to workers | `list(string)` | `[]` | no |
@@ -311,6 +313,7 @@ Apache 2 Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraf
| <a name="output_node_groups"></a> [node\_groups](#output\_node\_groups) | Outputs from EKS node groups. Map of maps, keyed by var.node\_groups keys |
| <a name="output_oidc_provider_arn"></a> [oidc\_provider\_arn](#output\_oidc\_provider\_arn) | The ARN of the OIDC Provider if `enable_irsa = true`. |
| <a name="output_security_group_rule_cluster_https_worker_ingress"></a> [security\_group\_rule\_cluster\_https\_worker\_ingress](#output\_security\_group\_rule\_cluster\_https\_worker\_ingress) | Security group rule responsible for allowing pods to communicate with the EKS cluster API. |
+| <a name="output_worker_groups"></a> [worker\_groups](#output\_worker\_groups) | Outputs from EKS worker groups. Map of maps, keyed by var.worker\_groups keys |
| <a name="output_worker_iam_instance_profile_arns"></a> [worker\_iam\_instance\_profile\_arns](#output\_worker\_iam\_instance\_profile\_arns) | default IAM instance profile ARN for EKS worker groups |
| <a name="output_worker_iam_instance_profile_names"></a> [worker\_iam\_instance\_profile\_names](#output\_worker\_iam\_instance\_profile\_names) | default IAM instance profile name for EKS worker groups |
| <a name="output_worker_iam_role_arn"></a> [worker\_iam\_role\_arn](#output\_worker\_iam\_role\_arn) | default IAM role ARN for EKS worker groups |
docs/faq.md (10 additions, 19 deletions)
@@ -2,7 +2,7 @@
## How do I customize X on the worker group's settings?

-All the options that can be customized for worker groups are listed in [local.tf](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/local.tf) under `workers_group_defaults_defaults`.
+All the options that can be customized for worker groups are listed in [local.tf](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/modules/worker_groups/local.tf) under `workers_group_defaults_defaults`.

Please open Issues or PRs if you think something is missing.
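
For example, a setting from that file can be overridden for every worker group at once via `workers_group_defaults`. This is a sketch with assumed values; `root_volume_size` and `instance_type` are among the keys defined in `workers_group_defaults_defaults`:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "example"   # assumed name
  cluster_version = "1.21"      # assumed version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  # Applied to every worker group unless overridden per group.
  workers_group_defaults = {
    root_volume_size = 100
    instance_type    = "m5.large"
  }
}
```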
@@ -61,12 +61,6 @@ You need to add the tags to the VPC and subnets yourself. See the [basic example

An alternative is to use the aws provider's [`ignore_tags` variable](https://www.terraform.io/docs/providers/aws/#ignore_tags-configuration-block). However this can also cause terraform to display a perpetual difference.

-## How do I safely remove old worker groups?
-
-You've added new worker groups. Deleting worker groups from earlier in the list causes Terraform to want to recreate all worker groups. This is a limitation with how Terraform works and the module using `count` to create the ASGs and other resources.
-
-The safest and easiest option is to set `asg_min_size` and `asg_max_size` to 0 on the worker groups to "remove".
-
## Why does changing the worker group's desired count not do anything?

The module is configured to ignore this value. Unfortunately Terraform does not support variables within the `lifecycle` block.
@@ -77,9 +71,9 @@ You can change the desired count via the CLI or console if you're not using the
If you are not using autoscaling and really want to control the number of nodes via terraform then set the `asg_min_size` and `asg_max_size` instead. AWS will remove a random instance when you scale down. You will have to weigh the risks here.

-## Why are nodes not recreated when the `launch_configuration`/`launch_template` is recreated?
+## Why are nodes not recreated when the `launch_configuration` is recreated?

-By default the ASG is not configured to be recreated when the launch configuration or template changes. Terraform spins up new instances and then deletes all the old instances in one go as the AWS provider team have refused to implement rolling updates of autoscaling groups. This is not good for kubernetes stability.
+By default the ASG is not configured to be recreated when the launch configuration changes. Terraform spins up new instances and then deletes all the old instances in one go as the AWS provider team have refused to implement rolling updates of autoscaling groups. This is not good for kubernetes stability.

You need to use a process to drain and cycle the workers.

@@ -137,35 +131,32 @@ Amazon EKS clusters must contain one or more Linux worker nodes to run core syst
1. Build AWS EKS cluster with the following workers configuration (default Linux):

   ```
-  worker_groups = [
-    {
-      name = "worker-group-linux"
+  worker_groups = {
+    worker-group-linux = {
       instance_type = "m5.large"
       platform = "linux"
       asg_desired_capacity = 2
     },
-  ]
+  }
   ```

2. Apply commands from https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html#enable-windows-support (use the tab named `Windows`)

3. Add one more worker group for Windows with the required field `platform = "windows"` and update your cluster. Worker group example:

   ```
-  worker_groups = [
-    {
-      name = "worker-group-linux"
+  worker_groups = {
+    worker-group-linux = {
       instance_type = "m5.large"
       platform = "linux"
       asg_desired_capacity = 2
     },
-    {
-      name = "worker-group-windows"
+    worker-group-windows = {
       instance_type = "m5.large"
       platform = "windows"
       asg_desired_capacity = 1
     },
-  ]
+  }
   ```

4. With `kubectl get nodes` you can see the cluster with mixed (Linux/Windows) nodes support.
docs/spot-instances.md (5 additions, 43 deletions)
@@ -22,65 +22,27 @@ Notes:
- There is an AWS blog article about this [here](https://aws.amazon.com/blogs/compute/run-your-kubernetes-workloads-on-amazon-ec2-spot-instances-with-amazon-eks/).
- Consider using [k8s-spot-rescheduler](https://github.com/pusher/k8s-spot-rescheduler) to move pods from on-demand to spot instances.

-## Using Launch Configuration
-
-Example worker group configuration that uses an ASG with launch configuration for each worker group:

Launch Template support is a recent addition to both AWS and this module. It might not be as tried and tested but it's more suitable for spot instances as it allowed multiple instance types in the same worker group:
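
For reference, a spot worker group under the map-based input might look like the sketch below. The keys (`override_instance_types`, `spot_instance_pools`, `kubelet_extra_args`, `public_ip`) come from the pre-existing `workers_group_defaults` and are assumed to carry over unchanged:

```hcl
worker_groups = {
  spot-workers = {
    override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
    spot_instance_pools     = 4   # number of Spot pools per AZ to draw from
    asg_max_size            = 5
    asg_desired_capacity    = 5
    kubelet_extra_args      = "--node-labels=node.kubernetes.io/lifecycle=spot"
    public_ip               = true
  }
}
```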
docs/upgrades.md (67 additions, 0 deletions)
@@ -58,3 +58,70 @@ Plan: 0 to add, 0 to change, 1 to destroy.
5. If everything sounds good to you, run `terraform apply`

After the first apply, we recommend creating a new node group and letting the module use the `node_group_name_prefix` (by removing the `name` argument) to generate names and avoid collisions during node group re-creation if needed, because the lifecycle is `create_before_destroy = true`.
+
+## Upgrade module to vXX.X.X for Worker Groups Managed as maps
+
+In this release, we added the ability to manage Worker Groups as maps (not lists), which makes it easier to add and remove worker groups.
+
+> NOTE: The new functionality supports only creating groups using Launch Templates!
+
+1. Run `terraform apply` with the previous module version. Make sure all changes are applied before proceeding.
+
+2. Upgrade your module and configure your worker groups by renaming the existing variables as follows:
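
Based on the renamed variables in the README table earlier in this changeset, the renaming presumably maps the old list-based inputs onto the `*_legacy` names, with new groups moving to the map-based `worker_groups`. A sketch with placeholder group definitions:

```hcl
# Previously:
#   worker_groups                 = [ { name = "workers-a", instance_type = "m5.large", asg_desired_capacity = 2 } ]
#   worker_groups_launch_template = [ { name = "workers-lt", instance_type = "m5.large" } ]

# After upgrading, the same definitions move to the legacy inputs:
worker_groups_legacy = [
  {
    name                 = "workers-a"
    instance_type        = "m5.large"
    asg_desired_capacity = 2
  },
]
worker_groups_launch_template_legacy = [
  {
    name          = "workers-lt"
    instance_type = "m5.large"
  },
]

# New groups are then defined as a map, created through Launch Templates:
worker_groups = {
  workers-b = {
    instance_type        = "m5.large"
    asg_desired_capacity = 2
  }
}
```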
0 commit comments