Commit 262b480

docs: Re-organize documentation for easier navigation and support for references in issues/PRs (#1981)
1 parent f7b4798 commit 262b480

12 files changed: +639 −778 lines changed

.github/CONTRIBUTING.md

Lines changed: 0 additions & 33 deletions
This file was deleted.

README.md

Lines changed: 46 additions & 673 deletions
Large diffs are not rendered by default.

docs/README.md

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
# Documentation

## Table of Contents

- [Frequently Asked Questions](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md)
- [Compute Resources](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md)
- [IRSA Integration](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/irsa-integration.md)
- [User Data](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/user_data.md)
- [Network Connectivity](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/network_connectivity.md)
- Upgrade Guides
  - [Upgrade to v17.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-17.0.md)
  - [Upgrade to v18.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-18.0.md)
File renamed without changes.

UPGRADE-18.0.md renamed to docs/UPGRADE-18.0.md

Lines changed: 2 additions & 0 deletions
@@ -2,6 +2,8 @@
Please consult the `examples` directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.

Note: please see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1744 where users have shared the steps/information for their individual configurations. Due to the numerous configuration possibilities, it is difficult to capture specific steps that will work for all; this issue has been very helpful for sharing how others were able to upgrade.

## List of backwards incompatible changes

- Launch configuration support has been removed and only launch template is supported going forward. AWS is no longer adding new features back into launch configuration and their docs state [`We strongly recommend that you do not use launch configurations. They do not provide full functionality for Amazon EC2 Auto Scaling or Amazon EC2. We provide information about launch configurations for customers who have not yet migrated from launch configurations to launch templates.`](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html)

docs/compute_resources.md

Lines changed: 209 additions & 0 deletions
@@ -0,0 +1,209 @@
# Compute Resources

## Table of Contents

- [EKS Managed Node Groups](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#eks-managed-node-groups)
- [Self Managed Node Groups](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#self-managed-node-groups)
- [Fargate Profiles](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#fargate-profiles)
- [Default Configurations](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#default-configurations)

ℹ️ Only the pertinent attributes are shown below for brevity

### EKS Managed Node Groups

Refer to the [EKS Managed Node Group documentation](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) for service related details.
1. The module creates a custom launch template by default to ensure settings such as tags are propagated to instances. To use the default template provided by the AWS EKS managed node group service, disable the launch template creation and set `launch_template_name` to an empty string:

   ```hcl
   eks_managed_node_groups = {
     default = {
       create_launch_template = false
       launch_template_name   = ""
     }
   }
   ```

2. Native support for Bottlerocket OS is provided by specifying the respective AMI type:

   ```hcl
   eks_managed_node_groups = {
     bottlerocket_default = {
       create_launch_template = false
       launch_template_name   = ""

       ami_type = "BOTTLEROCKET_x86_64"
       platform = "bottlerocket"
     }
   }
   ```
3. Users have limited support to extend the user data that is pre-pended to the user data provided by the AWS EKS Managed Node Group service:

   ```hcl
   eks_managed_node_groups = {
     prepend_userdata = {
       # See issue https://github.com/awslabs/amazon-eks-ami/issues/844
       pre_bootstrap_user_data = <<-EOT
       #!/bin/bash
       set -ex
       cat <<-EOF > /etc/profile.d/bootstrap.sh
       export CONTAINER_RUNTIME="containerd"
       export USE_MAX_PODS=false
       export KUBELET_EXTRA_ARGS="--max-pods=110"
       EOF
       # Source extra environment variables in bootstrap script
       sed -i '/^set -o errexit/a\\nsource /etc/profile.d/bootstrap.sh' /etc/eks/bootstrap.sh
       EOT
     }
   }
   ```
4. Bottlerocket OS is supported in a similar manner. However, note that the user data for Bottlerocket OS uses the TOML format:

   ```hcl
   eks_managed_node_groups = {
     bottlerocket_prepend_userdata = {
       ami_type = "BOTTLEROCKET_x86_64"
       platform = "bottlerocket"

       bootstrap_extra_args = <<-EOT
       # extra args added
       [settings.kernel]
       lockdown = "integrity"
       EOT
     }
   }
   ```
5. When using a custom AMI, the AWS EKS Managed Node Group service will NOT inject the necessary bootstrap script into the supplied user data. Users can elect to provide their own user data to bootstrap and connect or opt in to use the module provided user data:

   ```hcl
   eks_managed_node_groups = {
     custom_ami = {
       ami_id = "ami-0caf35bc73450c396"

       # By default, EKS managed node groups will not append bootstrap script;
       # this adds it back in using the default template provided by the module
       # Note: this assumes the AMI provided is an EKS optimized AMI derivative
       enable_bootstrap_user_data = true

       bootstrap_extra_args = "--container-runtime containerd --kubelet-extra-args '--max-pods=20'"

       pre_bootstrap_user_data = <<-EOT
       export CONTAINER_RUNTIME="containerd"
       export USE_MAX_PODS=false
       EOT

       # Because we have full control over the user data supplied, we can also run additional
       # scripts/configuration changes after the bootstrap script has been run
       post_bootstrap_user_data = <<-EOT
       echo "you are free little kubelet!"
       EOT
     }
   }
   ```
6. There is similar support for Bottlerocket OS:

   ```hcl
   eks_managed_node_groups = {
     bottlerocket_custom_ami = {
       ami_id   = "ami-0ff61e0bcfc81dc94"
       platform = "bottlerocket"

       # use module user data template to bootstrap
       enable_bootstrap_user_data = true
       # this will get added to the template
       bootstrap_extra_args = <<-EOT
       # extra args added
       [settings.kernel]
       lockdown = "integrity"

       [settings.kubernetes.node-labels]
       "label1" = "foo"
       "label2" = "bar"

       [settings.kubernetes.node-taints]
       "dedicated" = "experimental:PreferNoSchedule"
       "special" = "true:NoSchedule"
       EOT
     }
   }
   ```

See the [`examples/eks_managed_node_group/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group) for a working example of various configurations.
### Self Managed Node Groups

Refer to the [Self Managed Node Group documentation](https://docs.aws.amazon.com/eks/latest/userguide/worker.html) for service related details.

1. The `self-managed-node-group` uses the latest AWS EKS Optimized AMI (Linux) for the given Kubernetes version by default:

   ```hcl
   cluster_version = "1.21"

   # This self managed node group will use the latest AWS EKS Optimized AMI for Kubernetes 1.21
   self_managed_node_groups = {
     default = {}
   }
   ```
2. To use Bottlerocket, specify the `platform` as `bottlerocket` and supply a Bottlerocket OS AMI:

   ```hcl
   cluster_version = "1.21"

   self_managed_node_groups = {
     bottlerocket = {
       platform = "bottlerocket"
       ami_id   = data.aws_ami.bottlerocket_ami.id
     }
   }
   ```
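The `data.aws_ami.bottlerocket_ami` reference above is not defined by the module itself. One way to supply it is a standard `aws_ami` data source lookup; this is a sketch, and the name filter pattern is an assumption based on Bottlerocket's published AMI naming scheme — adjust the Kubernetes version and architecture to match your cluster:

```hcl
# Hypothetical AMI lookup for the example above
data "aws_ami" "bottlerocket_ami" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["bottlerocket-aws-k8s-1.21-x86_64-*"]
  }
}
```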
See the [`examples/self_managed_node_group/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/self_managed_node_group) for a working example of various configurations.

### Fargate Profiles

Fargate profiles are straightforward to use and therefore no further details are provided here. See the [`examples/fargate_profile/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/fargate_profile) for a working example of various configurations.

### Default Configurations

Each type of compute resource (EKS managed node group, self managed node group, or Fargate profile) provides the option for users to specify a default configuration. These default configurations can be overridden from within the compute resource's individual definition. The order of precedence for configurations (from highest to lowest):

- Compute resource individual configuration
- Compute resource family default configuration (`eks_managed_node_group_defaults`, `self_managed_node_group_defaults`, `fargate_profile_defaults`)
- Module default configuration (see `variables.tf` and `node_groups.tf`)
For example, the following creates 4 AWS EKS Managed Node Groups:

```hcl
eks_managed_node_group_defaults = {
  ami_type       = "AL2_x86_64"
  disk_size      = 50
  instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
}

eks_managed_node_groups = {
  # Uses module default configurations overridden by configuration above
  default = {}

  # This further overrides the instance types used
  compute = {
    instance_types = ["c5.large", "c6i.large", "c6d.large"]
  }

  # This further overrides the instance types and disk size used
  persistent = {
    disk_size      = 1024
    instance_types = ["r5.xlarge", "r6i.xlarge", "r5b.xlarge"]
  }

  # This overrides the OS used
  bottlerocket = {
    ami_type = "BOTTLEROCKET_x86_64"
    platform = "bottlerocket"
  }
}
```

docs/faq.md

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
# Frequently Asked Questions

- [How do I manage the `aws-auth` configmap?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-do-i-manage-the-aws-auth-configmap)
- [I received an error: `Error: Invalid for_each argument ...`](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#i-received-an-error-error-invalid-for_each-argument-)
- [Why are nodes not being registered?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#why-are-nodes-not-being-registered)
- [Why are there no changes when a node group's `desired_size` is modified?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#why-are-there-no-changes-when-a-node-groups-desired_size-is-modified)
- [How can I deploy Windows based nodes?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-can-i-deploy-windows-based-nodes)
- [How do I access compute resource attributes?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-do-i-access-compute-resource-attributes)

### How do I manage the `aws-auth` configmap?

TL;DR - https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1901

- Users can roll their own equivalent of `kubectl patch ...` using the [`null_resource`](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/9a99689cc13147f4afc426b34ba009875a28614e/examples/complete/main.tf#L301-L336)
- There is a module that was created to fill this gap and provides a Kubernetes based approach to provisioning: https://github.com/aidanmelen/terraform-aws-eks-auth
- Ideally, one of the following issues is resolved upstream for a more native experience for users:
  - https://github.com/aws/containers-roadmap/issues/185
  - https://github.com/hashicorp/terraform-provider-kubernetes/issues/723
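A minimal sketch of the `null_resource` approach described above — this assumes the `aws` and `kubectl` CLIs are available where Terraform runs, and the trigger, resource name, and manifest file name are all illustrative:

```hcl
resource "null_resource" "apply_aws_auth" {
  # Re-run whenever the cluster is replaced (illustrative trigger)
  triggers = {
    cluster_arn = module.eks.cluster_arn
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws eks update-kubeconfig --name ${module.eks.cluster_id}
      kubectl apply -f aws-auth.yaml
    EOT
  }
}
```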
### I received an error: `Error: Invalid for_each argument ...`

Users may encounter an error such as `Error: Invalid for_each argument - The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply ...`

This error is due to an upstream issue with [Terraform core](https://github.com/hashicorp/terraform/issues/4149). There are two potential options you can take to help mitigate this issue:

1. Create the dependent resources before the cluster => `terraform apply -target <your policy or your security group>` and then `terraform apply` for the cluster (or other similar means to just ensure the referenced resources exist before creating the cluster)

   - Note: this is the route users will have to take for adding additional security groups to nodes since there isn't a separate "security group attachment" resource

2. For additional IAM policies, users can attach the policies outside of the cluster definition as demonstrated below

   ```hcl
   resource "aws_iam_role_policy_attachment" "additional" {
     for_each = module.eks.eks_managed_node_groups
     # you could also do the following or any combination:
     # for_each = merge(
     #   module.eks.eks_managed_node_groups,
     #   module.eks.self_managed_node_group,
     #   module.eks.fargate_profile,
     # )

     # This policy does not have to exist at the time of cluster creation. Terraform can
     # deduce the proper order of its creation to avoid errors during creation
     policy_arn = aws_iam_policy.node_additional.arn
     role       = each.value.iam_role_name
   }
   ```
TL;DR - Terraform resources passed into the module's map definitions _must_ be known before you can apply the EKS module. The variables this potentially affects are:

- `cluster_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule)
- `node_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule)
- `iam_role_additional_policies` (i.e. - referencing an external policy resource)

Separately, note that setting `instance_refresh_enabled = true` will recreate your worker nodes without draining them first. It is recommended to install [aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler) for proper node draining. See the [instance_refresh](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/irsa_autoscale_refresh) example provided.
### Why are nodes not being registered?

Nodes not being able to register with the EKS control plane is generally due to networking misconfigurations.

1. At least one of the cluster endpoints (public or private) must be enabled.

   If you require a public endpoint, setting up both (public and private) and restricting the public endpoint via `cluster_endpoint_public_access_cidrs` is recommended. More info regarding communication with an endpoint is available [here](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html).

2. Nodes need to be able to contact the EKS cluster endpoint. By default, the module only creates a public endpoint. To access the endpoint, the nodes need outgoing internet access:

   - Nodes in private subnets: via a NAT gateway or instance along with the appropriate routing rules
   - Nodes in public subnets: ensure that nodes are launched with public IPs (enable through either the module here or your subnet setting defaults)

   **Important: If you enable only the public endpoint and configure `cluster_endpoint_public_access_cidrs` to restrict access, know that EKS nodes will also use the public endpoint and you must allow access to the endpoint. If not, your nodes will fail to work correctly.**

3. The private endpoint can also be enabled by setting `cluster_endpoint_private_access = true`. Ensure that VPC DNS resolution and hostnames are also enabled for your VPC when the private endpoint is enabled.

4. Nodes need to be able to connect to other AWS services to function (download container images, make API calls to assume roles, etc.). If for some reason you cannot enable public internet access for nodes, you can add VPC endpoints for the relevant services: EC2 API, ECR API, ECR DKR and S3.
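Putting the endpoint guidance above together, a typical restricted-public-plus-private configuration might look like the following sketch (the CIDR value is illustrative; remember it must cover your nodes' egress IPs if they reach the cluster via the public endpoint):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  # Both endpoints enabled; public access restricted to a known CIDR
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"]
  cluster_endpoint_private_access      = true

  # ...
}
```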
### Why are there no changes when a node group's `desired_size` is modified?

The module is configured to ignore this value. Unfortunately, Terraform does not support variables within the `lifecycle` block. The setting is ignored to allow autoscaling via controllers such as cluster autoscaler or Karpenter to work properly and without interference by Terraform. Changing the desired count must be handled outside of Terraform once the node group is created.
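The mechanism behind this is a `lifecycle` block on the underlying resource, along the lines of the following sketch (the module's actual resource definition may differ; the sizes shown are illustrative):

```hcl
resource "aws_eks_node_group" "example" {
  # ...

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 3
  }

  lifecycle {
    # Let external autoscalers (cluster autoscaler, Karpenter) own desired_size
    ignore_changes = [scaling_config[0].desired_size]
  }
}
```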
### How can I deploy Windows based nodes?

To enable Windows support for your EKS cluster, you will need to apply some configuration manually. See the [Enabling Windows Support (Windows/MacOS/Linux)](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html#enable-windows-support) guide.

In addition, Windows based nodes require an additional cluster RBAC role (`eks:kube-proxy-windows`).

Note: Windows based node support is limited to a default user data template that is provided due to the lack of Windows support and manual steps required to provision Windows based EKS nodes.

### How do I access compute resource attributes?

Examples of accessing the attributes of the compute resource(s) created by the root module are shown below. Note - the assumption is that your cluster module definition is named `eks` as in `module "eks" { ... }`:
- EKS Managed Node Group attributes

  ```hcl
  eks_managed_role_arns = [for group in module.eks.eks_managed_node_groups : group.iam_role_arn]
  ```

- Self Managed Node Group attributes

  ```hcl
  self_managed_role_arns = [for group in module.eks.self_managed_node_groups : group.iam_role_arn]
  ```

- Fargate Profile attributes

  ```hcl
  fargate_profile_pod_execution_role_arns = [for group in module.eks.fargate_profiles : group.fargate_profile_pod_execution_role_arn]
  ```
