Self managed node groups - IAM Role - Locals variables unavailable before first apply #2458
Comments
Thank you for the links @jidckii; however, I don't think that's the case. To migrate the terraform-aws-eks module to the newer version, we chose to create another module using the 19.6.0 version and import the existing resources, instead of changing the current code (which uses version 17.24.0) and applying the changes. We made this decision so that, if we ever need to change the EKS resources, we'll still have a functional Terraform configuration that can be changed at any time. |
I am having the exact same problem and I'm migrating from v18.x to v19.x. |
Found out what is going on here! This is a limitation on the values used in for_each, described in the first paragraph of the Limitations section of the Terraform for_each documentation. |
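To make the limitation concrete, here is a minimal sketch (not from the module; illustrative resources only, requiring the "random" and "null" providers) showing a for_each whose map keys depend on an apply-time value next to one whose keys are static:
resource "random_pet" "name" {}

# Fails at plan time with "Invalid for_each argument":
# the set elements (and therefore the keys) are only known after apply.
resource "null_resource" "unknown_keys" {
  for_each = toset([random_pet.name.id])
}

# Plans fine: the key is a static string, even though the value is unknown.
resource "null_resource" "known_keys" {
  for_each = {
    generated = random_pet.name.id
  }
}
The longer demonstration later in this thread (with random_string and null_resource against a local backend) shows the same behavior.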
@ra-ramos, are you experiencing the exact same problem, or is your problem related to EKS Managed Node Groups instead? |
I'm also experiencing this |
OK, I've got quite a lengthy comment to make. First off, this issue seems to be related to the following issues: in #1753 (comment) @bryantbiggs says he made a PR to document the issue in the README. That documentation no longer appears to exist, per issue #2265 (comment) where he says to check the FAQ. At the time of writing I don't see this documentation, but it still exists in the diff of his PR here: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/1766/files He seems to advise that you create the policy yourself and then, I assume, set create_iam_role to false. Doing this produces the error seen in issue #2411. This is because the following code: for_each = { for k, v in toset(compact([
  "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
  "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
  var.iam_role_attach_cni_policy ? local.cni_policy : "",
])) : k => v if var.create && var.create_iam_role } still has to read values that are only determined after apply. I fixed this by changing the code in my branch to: for_each = var.create && var.create_iam_role ? { for k, v in toset(compact([
  "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
  "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
  var.iam_role_attach_cni_policy ? local.cni_policy : "",
])) : k => v } : {} which evaluates the user-provided variables first and only builds the for expression when they are true. This works on my branch when var.create_iam_role is set to false. So I would:
|
Even if you evaluate the condition first, this behavior is a limitation of the for_each statement, which requires the keys of that map to be known values before apply, as you can see here: https://developer.hashicorp.com/terraform/language/meta-arguments/for_each#limitations-on-values-used-in-for_each At the time I submitted a PR related to this issue, I set up a local backend to test this behavior with Terraform: # backend.tf
terraform {
backend "local" {
path = "terraform.tfstate"
}
}
# terraform.tf
locals {
attach_extra_random_string = true
map_with_known_keys = merge({
FirstRandomString = random_string.one.result,
SecondRandomString = random_string.two.result
}, local.attach_extra_random_string ? {
ExtraRandomString = random_string.four.result
} : {})
}
resource "random_string" "one" {
length = 10
special = false
}
resource "random_string" "two" {
length = 10
special = false
}
resource "random_string" "four" {
length = 10
special = false
}
resource "null_resource" "set_strings" {
for_each = { for k, v in toset(compact([
random_string.one.result,
random_string.two.result,
"",
random_string.four.result
])) : k => v }
provisioner "local-exec" {
command = "echo ${each.value}"
}
}
resource "null_resource" "list_strings" {
for_each = { for k, v in tolist(compact([
random_string.one.result,
random_string.two.result,
"",
random_string.four.result
])) : k => v }
provisioner "local-exec" {
command = "echo ${each.value}"
}
}
resource "null_resource" "map_with_known_keys" {
for_each = { for k, v in local.map_with_known_keys : k => v }
provisioner "local-exec" {
command = "echo ${each.value}"
}
} In this example, if you run an init and a plan, you can see that the resources whose for_each keys come from the random_string results fail at plan time, while null_resource.map_with_known_keys, whose keys are static, plans successfully. @ryanpeach I see your comment on my closed PR, can you put your concerns here so I can work on it later? |
Instead of trying proposed solutions - what if we provide a reproduction that demonstrates the issue? |
I understand the desire to have a reproduction, but unfortunately I have too much on my plate. You are seeing a lot of issues on this one line, and my change disables that line in a way that lets the user fix it themselves. |
And yet, nobody is able to provide a reproduction. I see at least 4 different individuals here - I don't know that that warrants |
I'm sorry for the late reply, I'll post an example to reproduce the error by the end of the day |
Here is an example, guys! It came from the self_managed_node_group example. Terraform version: 1.4.0 #data.tf
data "aws_partition" "current" {}
data "aws_iam_policy_document" "node_assume_policy" {
statement {
sid = "EKSWorkersAssumeRole"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
statement {
sid = "SSMManagedInstance"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ssm.amazonaws.com"]
}
}
}
#eks.tf
################################################################################
# EKS Module
################################################################################
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.10.0"
cluster_name = local.cluster_name
cluster_version = local.cluster_version
cluster_endpoint_public_access = true
create_node_security_group = true
cluster_addons = {
coredns = {
most_recent = true
}
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
create_aws_auth_configmap = true
manage_aws_auth_configmap = true
# Cluster IAM roles
create_iam_role = true
iam_role_name = local.cluster_name
iam_role_use_name_prefix = true
iam_role_path = "/"
# Self Managed Node Groups
self_managed_node_group_defaults = {
create_iam_instance_profile = false
iam_instance_profile_arn = aws_iam_instance_profile.nodes.arn
iam_role_attach_cni_policy = true
suspended_processes = ["AZRebalance"]
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = 75
volume_type = "gp3"
}
}
}
network_interfaces = [
{
associate_public_ip_address = true
delete_on_termination = true
}
]
autoscaling_group_tags = {
"k8s.io/cluster-autoscaler/enabled" : true,
"k8s.io/cluster-autoscaler/${local.cluster_name}" : "owned",
}
}
self_managed_node_groups = {
spot = {
name = "spot-ig"
subnet_ids = module.vpc.public_subnets
min_size = 0
max_size = 7
desired_size = 0
ami_id = data.aws_ami.eks_default.id
instance_type = "m6i.large"
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
}
on-demand = {
name = "on-demand-ig"
subnet_ids = module.vpc.public_subnets
min_size = 1
max_size = 3
desired_size = 1
ami_id = data.aws_ami.eks_default.id
instance_type = "m6i.large"
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=on-demand'"
}
}
depends_on = [
aws_iam_instance_profile.nodes,
aws_iam_role.nodes,
aws_iam_role_policy_attachment.ssm,
aws_iam_role_policy_attachment.ecr_read_only,
aws_iam_role_policy_attachment.cni,
aws_iam_role_policy_attachment.worker,
]
}
################################################################################
# Supporting Resource
################################################################################
data "aws_ami" "eks_default" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amazon-eks-node-${local.cluster_version}-v*"]
}
}
#iam.tf
resource "aws_iam_role" "nodes" {
name = "${local.cluster_name}-nodes"
assume_role_policy = data.aws_iam_policy_document.node_assume_policy.json
}
resource "aws_iam_instance_profile" "nodes" {
name = "${local.cluster_name}-nodes-instance-profile"
role = aws_iam_role.nodes.name
}
resource "aws_iam_role_policy_attachment" "ssm" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonSSMManagedInstanceCore"
}
resource "aws_iam_role_policy_attachment" "ecr_read_only" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
}
resource "aws_iam_role_policy_attachment" "cni" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEKS_CNI_Policy"
}
resource "aws_iam_role_policy_attachment" "worker" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEKSWorkerNodePolicy"
}
#locals.tf
locals {
cluster_name = "ex-${replace(basename(path.cwd), "_", "-")}"
cluster_version = "1.24"
region = "us-east-1"
policy_prefix = "arn:${data.aws_partition.current.partition}:iam::aws:policy"
}
# providers.tf
provider "aws" {
region = local.region
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
#vpc.tf
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.0"
name = "test-vpc"
cidr = "172.0.0.0/16"
azs = ["us-east-1a", "us-east-1f"]
private_subnets = ["172.0.1.0/24", "172.0.2.0/24"]
public_subnets = ["172.0.3.0/24", "172.0.4.0/24"]
create_elasticache_subnet_group = false
create_database_subnet_group = false
create_redshift_subnet_group = false
enable_nat_gateway = false
enable_vpn_gateway = false
enable_ipv6 = true
enable_dns_hostnames = true
enable_dns_support = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
tags = {
Terraform = "true"
Environment = "test"
}
} |
This is only an example that I tweaked from the self_managed_node_group example (not the real code that we have at the company). I'm not sure whether my example bootstraps a functional EKS cluster, because the VPC CIDR ranges are too small and the plan stops at the error anyway. To avoid any misconfiguration coming from the VPC module, I changed the entire VPC code to match the VPC definition in the examples/self_managed_node_group/main.tf file, and it still presents the same problem. I'll post the same example here (with just the VPC differences) and the whole Terraform code: #data.tf
data "aws_partition" "current" {}
data "aws_iam_policy_document" "node_assume_policy" {
statement {
sid = "EKSWorkersAssumeRole"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
statement {
sid = "SSMManagedInstance"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ssm.amazonaws.com"]
}
}
}
#eks.tf
################################################################################
# EKS Module
################################################################################
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.10.0"
cluster_name = local.cluster_name
cluster_version = local.cluster_version
cluster_endpoint_public_access = true
create_node_security_group = true
cluster_addons = {
coredns = {
most_recent = true
}
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
create_aws_auth_configmap = true
manage_aws_auth_configmap = true
# Cluster IAM roles
create_iam_role = true
iam_role_name = local.cluster_name
iam_role_use_name_prefix = true
iam_role_path = "/"
enable_irsa = false
cluster_encryption_config = {}
# Self Managed Node Groups
self_managed_node_group_defaults = {
create_iam_instance_profile = false #You can even set
iam_instance_profile_arn = aws_iam_instance_profile.nodes.arn
iam_role_attach_cni_policy = true
suspended_processes = ["AZRebalance"]
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = 75
volume_type = "gp3"
}
}
}
network_interfaces = [
{
associate_public_ip_address = true
delete_on_termination = true
}
]
autoscaling_group_tags = {
"k8s.io/cluster-autoscaler/enabled" : true,
"k8s.io/cluster-autoscaler/${local.cluster_name}" : "owned",
}
}
self_managed_node_groups = {
spot = {
name = "spot-ig"
subnet_ids = module.vpc.public_subnets
min_size = 0
max_size = 7
desired_size = 0
ami_id = data.aws_ami.eks_default.id
instance_type = "m6i.large"
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
}
on-demand = {
name = "on-demand-ig"
subnet_ids = module.vpc.public_subnets
min_size = 1
max_size = 3
desired_size = 1
ami_id = data.aws_ami.eks_default.id
instance_type = "m6i.large"
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=on-demand'"
}
}
depends_on = [
aws_iam_instance_profile.nodes,
aws_iam_role.nodes,
aws_iam_role_policy_attachment.ssm,
aws_iam_role_policy_attachment.ecr_read_only,
aws_iam_role_policy_attachment.cni,
aws_iam_role_policy_attachment.worker,
]
}
################################################################################
# Supporting Resource
################################################################################
data "aws_ami" "eks_default" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amazon-eks-node-${local.cluster_version}-v*"]
}
}
#iam.tf
resource "aws_iam_role" "nodes" {
name = "${local.cluster_name}-nodes"
assume_role_policy = data.aws_iam_policy_document.node_assume_policy.json
}
resource "aws_iam_instance_profile" "nodes" {
name = "${local.cluster_name}-nodes-instance-profile"
role = aws_iam_role.nodes.name
}
resource "aws_iam_role_policy_attachment" "ssm" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonSSMManagedInstanceCore"
}
resource "aws_iam_role_policy_attachment" "ecr_read_only" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
}
resource "aws_iam_role_policy_attachment" "cni" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEKS_CNI_Policy"
}
resource "aws_iam_role_policy_attachment" "worker" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEKSWorkerNodePolicy"
}
#locals.tf
locals {
cluster_name = "ex-${replace(basename(path.cwd), "_", "-")}"
cluster_version = "1.24"
region = "us-east-1"
policy_prefix = "arn:${data.aws_partition.current.partition}:iam::aws:policy"
azs = ["us-east-1a", "us-east-1f"]
vpc_cidr = "10.0.0.0/16"
}
#provider.tf
provider "aws" {
region = local.region
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
#vpc.tf
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.0"
name = local.cluster_name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
intra_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]
enable_dns_hostnames = true
enable_dns_support = true
enable_ipv6 = true
enable_flow_log = true
create_flow_log_cloudwatch_iam_role = true
create_flow_log_cloudwatch_log_group = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
tags = {
Terraform = "true"
Environment = "test"
}
}
Terraform plan output:
|
This issue has been automatically marked as stale because it has been open 30 days |
Sorry for the late reply, but I don't think this is a stale issue at all. Yesterday, while debugging this issue, I tried to run the exact same code without the depends_on block. |
You should not use depends_on on the module like this. |
I have this issue with context
Error: Invalid for_each argument |
It was switched from
to
in this commit, which is part of v19. Switching this module (eks_managed_node_group only) to the latest v18 is a workaround for me. |
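For context on that change, here is a rough sketch of the v19-style usage, assuming the variable in question is iam_role_additional_policies (a map(string)); the keys stay static while only the values (policy ARNs) may be computed. This is a sketch, not the module's documented example, and the required module arguments are elided:
resource "aws_iam_policy" "additional" {
  name = "example-additional-node-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ec2:DescribeInstances"]
      Resource = "*"
    }]
  })
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  # ... required arguments (cluster_name, cluster_version, vpc_id, subnet_ids, ...) elided ...

  eks_managed_node_groups = {
    default = {
      iam_role_additional_policies = {
        # static key, possibly-computed value - this is the shape for_each can handle
        additional = aws_iam_policy.additional.arn
      }
    }
  }
}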
See this to configure VPC CNI before nodes are launched
|
@bryantbiggs that will not work for AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG, as Kubernetes resources also need to be created (kind: ENIConfig). |
Correct - but you don't need compute for that. Here is an example that does this: https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/examples/vpc-cni-custom-networking |
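As a rough sketch of what the linked example does (the before_compute and configuration_values attributes reflect my reading of that example, not something stated in this thread; required module arguments are elided):
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.12"

  # ... cluster_name, cluster_version, vpc_id, subnet_ids, node groups, etc. ...

  cluster_addons = {
    vpc-cni = {
      most_recent    = true
      before_compute = true # configure the addon before any node groups are created
      configuration_values = jsonencode({
        env = {
          AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG = "true"
          ENI_CONFIG_LABEL_DEF               = "topology.kubernetes.io/zone"
        }
      })
    }
  }
}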
Interesting, but it still doesn't seem to work. It gets deployed, but has the linked solution been tested to verify that pods are indeed given an IP from the right subnet at startup? From what I can tell, the initial (EKS managed) nodes still don't use that config initially; after restarting/replacing the nodes it works. ➜ test git:(cni_separate_nodes_2) ✗ k get pods -A -o yaml | grep 'podIP: 10.116.' | wc -l Separate from my issue: "It seems like keys are used in the for_each which are only derived at apply." Does this seem like a problem to be fixed in the module? |
No, it's a user configuration problem due to the use of depends_on. |
I don't use that. Only this:
I just want to make sure the ENIConfigs are applied before the node group is created. So how is this an issue with user config? |
Because Terraform states that this is not supported and that you should not do it, due to the negative side effects you are reporting: hashicorp/terraform#30340 (comment) |
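A minimal sketch of the difference being described, with illustrative names (the two module blocks below are alternatives, not meant to coexist, and other required arguments are elided):
# Alternative 1 - module-level depends_on, the pattern being discouraged: it forces
# every data source inside the module to be read at apply time, so keys that would
# otherwise be static become "known after apply" and break for_each.
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other arguments ...
  depends_on = [aws_iam_instance_profile.nodes]
}

# Alternative 2 - express the ordering through an attribute reference instead;
# Terraform infers the dependency without deferring the module's data sources.
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other arguments ...
  self_managed_node_group_defaults = {
    create_iam_instance_profile = false
    iam_instance_profile_arn    = aws_iam_instance_profile.nodes.arn # implicit dependency
  }
}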
I got the same problem. Actually, the code above works well in the plan output. |
Static keys are used in the current implementation - are you using a depends_on or a computed value somewhere? |
@bryantbiggs, I know now that using depends_on like this is not recommended. However, on this issue we discuss a few workarounds to the problem, and on my closed PR (#2502) I provide a solution to the topic, using this module as a base to successfully manage three EKS clusters without any problems. The only side effect these changes cause is managing the IAM roles and policies in a different way, changing how they are stored in the state (for which one or two state migrations might be needed). My whole point here is this: since we have ways to work with this module without any major side effects, shouldn't the module support this case? |
Why should this module take on this burden and support a practice that Terraform has directly stated you should not do, instead of users modifying the way in which they manage their Terraform? |
Because this is a recurring issue that, unfortunately, went stale a few times and has accumulated more workarounds than I expected. One of them was previously discussed in issue #1753, but on that issue two different problems were discussed, and both of them originate from hashicorp/terraform#4149, as you mentioned. I'm really sorry to be so annoying about this, but I'm not defending any particular solution right now; I'm defending the existence of this as an issue. |
You are conflating multiple different things. In v18 the variable was a list of strings which was converted to a map internally; in v19 we changed the variable to be a map of strings in order to follow what is prescribed as the current solution to 4149 and 30937 by the Terraform maintainers - which is to use static values for the keys of maps. So now we come back to here and why it's so VITALLY important for users to provide reproductions so we can properly troubleshoot. Had you shown up front that you were providing a computed value or using depends_on, this would have been caught much sooner. Here is a corrected example of the reproduction you provided - it does not suffer from any of the issues you have described: provider "aws" {
region = local.region
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
locals {
cluster_name = "example"
region = "us-east-1"
policy_prefix = "arn:aws:iam::aws:policy"
azs = ["us-east-1a", "us-east-1f"]
vpc_cidr = "10.0.0.0/16"
}
################################################################################
# EKS Module
################################################################################
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.12"
cluster_name = local.cluster_name
cluster_version = "1.24"
cluster_endpoint_public_access = true
create_node_security_group = true
cluster_addons = {
coredns = {
most_recent = true
}
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
create_aws_auth_configmap = true
manage_aws_auth_configmap = true
aws_auth_roles = [
{
rolearn = aws_iam_role.nodes.arn
username = "system:node:{{EC2PrivateDNSName}}"
groups = [
"system:bootstrappers",
"system:nodes",
]
},
]
enable_irsa = false
cluster_encryption_config = {}
# Self Managed Node Groups
self_managed_node_group_defaults = {
create_iam_instance_profile = false
iam_instance_profile_arn = aws_iam_instance_profile.nodes.arn
suspended_processes = ["AZRebalance"]
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = 75
volume_type = "gp3"
}
}
}
subnet_ids = module.vpc.public_subnets
network_interfaces = [
{
associate_public_ip_address = true
delete_on_termination = true
}
]
autoscaling_group_tags = {
"k8s.io/cluster-autoscaler/enabled" : true,
"k8s.io/cluster-autoscaler/${local.cluster_name}" : "owned",
}
}
self_managed_node_groups = {
spot = {
name = "spot-ig"
min_size = 0
max_size = 7
desired_size = 0
instance_type = "m6i.large"
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
}
on-demand = {
name = "on-demand-ig"
min_size = 1
max_size = 3
desired_size = 1
instance_type = "m6i.large"
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=on-demand'"
}
}
tags = {
Terraform = "true"
Environment = "test"
}
}
################################################################################
# Supporting Resource
################################################################################
data "aws_iam_policy_document" "node_assume_policy" {
statement {
sid = "EKSWorkersAssumeRole"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"ec2.amazonaws.com",
"ssm.amazonaws.com",
]
}
}
}
resource "aws_iam_role" "nodes" {
name = "${local.cluster_name}-nodes"
assume_role_policy = data.aws_iam_policy_document.node_assume_policy.json
}
resource "aws_iam_instance_profile" "nodes" {
name = "${local.cluster_name}-nodes-instance-profile"
role = aws_iam_role.nodes.name
}
resource "aws_iam_role_policy_attachment" "ssm" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonSSMManagedInstanceCore"
}
resource "aws_iam_role_policy_attachment" "ecr_read_only" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
}
resource "aws_iam_role_policy_attachment" "cni" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEKS_CNI_Policy"
}
resource "aws_iam_role_policy_attachment" "worker" {
role = aws_iam_role.nodes.name
policy_arn = "${local.policy_prefix}/AmazonEKSWorkerNodePolicy"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.0"
name = local.cluster_name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
intra_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]
enable_dns_hostnames = true
enable_dns_support = true
enable_ipv6 = true
enable_flow_log = true
create_flow_log_cloudwatch_iam_role = true
create_flow_log_cloudwatch_log_group = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
tags = {
Terraform = "true"
Environment = "test"
}
} |
@bryantbiggs Thanks, I understand this issue now! I'll look for ways to do this without depends_on. |
Thanks for the explanation. I understand the depends_on part: it adds a dependency for every sub-resource, eventually creating issues with apply-time evaluation, mostly of data sources. I converted my code from:
to
Having said that - and I do understand the craziness of maintaining this across the mentioned issues - from an engineering perspective, if an option is explicitly disabled (create_iam_role = false, iam_role_attach_cni_policy = false), the underlying resources should preferably not be evaluated or looped over at all, so they cannot cause issues internally; see the sketch after this comment. But the bigger picture here may play a role. |
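A minimal sketch of that guard, echoing the change proposed earlier in this thread; the variable, local, and role names here are illustrative, not the module's actual internals:
# Check the user-supplied booleans first, so that when the feature is disabled the
# for_each map (and any possibly-unknown values inside it) is never evaluated at all.
variable "create" {
  type    = bool
  default = true
}

variable "create_iam_role" {
  type    = bool
  default = true
}

locals {
  iam_role_policy_prefix = "arn:aws:iam::aws:policy" # kept static for this sketch
  default_policies = {
    AmazonEKSWorkerNodePolicy          = "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy"
    AmazonEC2ContainerRegistryReadOnly = "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
  }
}

resource "aws_iam_role_policy_attachment" "this" {
  for_each = var.create && var.create_iam_role ? local.default_policies : {}

  role       = "my-node-role" # illustrative placeholder
  policy_arn = each.value
}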
closing out issue for now - please let me know if there is anything left unanswered |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
Description
We are trying to upgrade our terraform-aws-eks version from v17.24.0 to v19.6.0. We want to use the Self Managed Node Groups to replace the former Worker Groups and manage any worker IAM resources outside the module, mainly because we need more permissions (on the assume_role_policy) for the IAM role that would be used by the IAM instance profile attached to the Node Groups.
Actual behavior
However, while working on this we got this error, after a
terragrunt plan:
We don't want to run
terragrunt apply
two times (one using the -target planning option as mentioned in the output, and one to apply all remaining resources) to get the module deployed.
Expected behavior
A regular output from a
terragrunt plan
command.
Versions
Reproduction Code
I just displayed the code parts relevant to this problem. We do not define anything related to IAM in the self_managed_node_group variable. Feel free to ask for more information if needed.