
Self managed node groups - IAM Role - Locals variables unavailable before first apply #2458

Closed
nathaclmpaulino opened this issue Feb 8, 2023 · 37 comments


@nathaclmpaulino

Description

We are trying to upgrade our terraform-aws-eks version from v17.24.0 to v19.6.0. We want to use the Self Managed Node Groups to replace the former Worker Groups and manage any worker IAM resources outside the module, mainly because we need more permissions (on the assume_role_policy) for the IAM role that would be used by the IAM instance profile attached to the Node Groups.

Actual behavior

However, while working on this we got the following error after running terragrunt plan:

Error: Invalid for_each argument
 
   on .terraform/modules/eks/modules/self-managed-node-group/main.tf line 740, in resource "aws_iam_role_policy_attachment" "this":
  740:   for_each = { for k, v in toset(compact([
  741:     "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
  742:     "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
  743:     var.iam_role_attach_cni_policy ? local.cni_policy : "",
  744:   ])) : k => v if var.create && var.create_iam_instance_profile }
    ├────────────────
     │ local.cni_policy is a string, known only after apply
     │ local.iam_role_policy_prefix is a string, known only after apply
     │ var.create is true
     │ var.create_iam_instance_profile is false
     │ var.iam_role_attach_cni_policy is false
 
The "for_each" map includes keys derived from resource attributes that
cannot be determined until apply, and so Terraform cannot determine the
full set of keys that will identify the instances of this resource.
 
When working with unknown values in for_each, it's better to define the map
keys statically in your configuration and place apply-time results only in
the map values.
 
Alternatively, you could use the -target planning option to first apply
only the resources that the for_each value depends on, and then apply a
second time to fully converge.

We don't want to run terragrunt apply two times (once using the -target planning option, as mentioned in the output, and once to apply all the remaining resources) to get the module deployed.

Expected behavior

A regular output from a terragrunt plan command.

Versions

Terragrunt v0.41.0
Terraform v1.3.6
provider registry.terraform.io/hashicorp/aws v4.53.0
provider registry.terraform.io/hashicorp/tls v4.0.4
provider registry.terraform.io/hashicorp/kubernetes v2.17.0

Reproduction Code

I've only included the part of the code relevant to this problem. We don't define anything IAM-related in the self_managed_node_group variable. Feel free to ask for more information if needed.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.6.0"

  cluster_ip_family = "ipv4"
  ...

  self_managed_node_group_defaults = {

    # IAM Instance Profile
    create_iam_instance_profile = false
    iam_instance_profile_arn    = aws_iam_instance_profile.workers_iam_instance_profile.arn
    iam_role_attach_cni_policy  = false
    ...
  }
  ...
  depends_on = [
    aws_iam_role.workers_iam_role,
    aws_iam_instance_profile.workers_iam_instance_profile,
    aws_iam_role_policy_attachment.policies,
  ]
}
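
For context, the externally managed IAM resources referenced in the depends_on above would look roughly like the following. This is only a sketch based on the description (we manage the worker role outside the module because we need a broader assume_role_policy); the policy document name, the role names, and the attached policies are illustrative, not taken from our real code.

data "aws_iam_policy_document" "workers_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type = "Service"
      # extra principals beyond ec2.amazonaws.com are the reason we manage this role ourselves
      identifiers = ["ec2.amazonaws.com", "ssm.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "workers_iam_role" {
  name               = "eks-workers"
  assume_role_policy = data.aws_iam_policy_document.workers_assume_role.json
}

resource "aws_iam_instance_profile" "workers_iam_instance_profile" {
  name = "eks-workers"
  role = aws_iam_role.workers_iam_role.name
}

resource "aws_iam_role_policy_attachment" "policies" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
  ])

  role       = aws_iam_role.workers_iam_role.name
  policy_arn = each.key
}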
@nathaclmpaulino
Author

Thank you for the links @jidckii; however, I don't think that's the case here.

To migrate the terraform-aws-eks module to the newer version, we chose to create another module using the 19.6.0 version and import the existing resources instead of changing the current code (currently using the 17.24.0 version) and applying the changes.

We made this decision so that, if we need to change the EKS resources, we'll still have a functional Terraform configuration that can be changed at any time.

@ra-ramos

ra-ramos commented Mar 2, 2023

I am having the exact same problem and I'm migrating from v18.x to v19.x.

@nathaclmpaulino
Author

I found out what is going on here! This is a limitation on the values used in for_each, also described in the first paragraph of this section of the Terraform for_each docs.

@nathaclmpaulino
Author

@ra-ramos, are you experiencing the exact same problem, or is your problem related to the EKS Managed Node Groups instead?

@ryanpeach

I'm also experiencing this

@ryanpeach

ryanpeach commented Mar 8, 2023

OK, I've got quite a doozy of a comment to make:

First off, this issue seems to be related to the following issues:

#2337
#1753

In #1753 (comment) @bryantbiggs says he made a PR to document the issue in the README. That documentation no longer appears to exist; per issue #2265 (comment), he says to check the FAQ. At the time of writing I don't see this documentation, but it still exists in the diff of his PR here: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/1766/files

He seems to advise that you create the policy/role yourself, and then, I assume, set create_iam_role to false.

Doing this produces the error seen in this issue #2411.

This is because the following code:

  for_each = { for k, v in toset(compact([
    "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
    "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
    var.iam_role_attach_cni_policy ? local.cni_policy : "",
  ])) : k => v if var.create && var.create_iam_role }

Still has to evaluate values that are only determined after apply. I fixed this by changing the code in my branch to:

  for_each = var.create && var.create_iam_role ? { for k, v in toset(compact([
    "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
    "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
    var.iam_role_attach_cni_policy ? local.cni_policy : "",
  ])) : k => v } : {}

This evaluates the user-supplied variables first and only builds the map when they are true; when either flag is false, for_each is simply an empty map and the apply-time values never need to be resolved.

This works on my branch when var.create_iam_role is set to false.

So I would:

  1. Fix this line in the code.
  2. Restore the documentation that now appears to be absent.

@nathaclmpaulino
Author

nathaclmpaulino commented Mar 8, 2023

Even if you evaluate the var.create && var.create_iam_role expression before building the map, this issue still pops up whenever the whole expression evaluates to true. I think we are talking about two different but related issues here.

This behavior is a limitation of the for_each meta-argument, which requires the keys of the map to be known before apply, as you can see here: https://developer.hashicorp.com/terraform/language/meta-arguments/for_each#limitations-on-values-used-in-for_each
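
For example, even the wrapped form still fails on a first plan whenever the flag evaluates to true, because the map keys themselves depend on values that are only known after apply. Here's a minimal standalone sketch (the resource and variable names are just illustrative):

variable "create" {
  type    = bool
  default = true
}

resource "random_string" "example" {
  length  = 10
  special = false
}

resource "null_resource" "wrapped" {
  # When var.create is false, for_each is just the empty map and the plan succeeds.
  # When it is true, the keys come from random_string.example.result, which is only
  # known after apply, so `terraform plan` fails with "Invalid for_each argument".
  for_each = var.create ? { for v in compact([random_string.example.result]) : v => v } : {}
}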

When I submitted a PR related to this issue, I set up a local backend to test this behavior with Terraform:

# backend.tf
terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

# terraform.tf
locals {
  attach_extra_random_string = true

  map_with_known_keys = merge({
    FirstRandomString = random_string.one.result,
    SecondRandomString = random_string.two.result
  }, local.attach_extra_random_string ? {
    ExtraRandomString = random_string.four.result
  } : {})
}
resource "random_string" "one" {
  length  = 10
  special = false
}

resource "random_string" "two" {
  length  = 10
  special = false
}

resource "random_string" "four" {
  length  = 10
  special = false 
}

resource "null_resource" "set_strings" {
  for_each = { for k, v in toset(compact([
    random_string.one.result,
    random_string.two.result,
    "",
    random_string.four.result
  ])) : k => v }
  
  provisioner "local-exec" {
    command = "echo ${each.value}"  
  }
}

resource "null_resource" "list_strings" {
  for_each = { for k, v in tolist(compact([
    random_string.one.result,
    random_string.two.result,
    "",
    random_string.four.result
  ])) : k => v }

  provisioner "local-exec" {
    command = "echo ${each.value}"
  }
}

resource "null_resource" "map_with_known_keys" {
  for_each = { for k, v in local.map_with_known_keys : k => v }

  provisioner "local-exec" {
    command = "echo ${each.value}"
  }
}

In this example, if you run an init and a plan, you'll see that null_resource.map_with_known_keys doesn't show the error I mentioned, but the other two null_resources do.
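
Following the same idea, the module's policy attachment could key the map statically and keep the apply-time ARNs only in the values. This is just a sketch of what such a rewrite might look like, not the module's current code (the role reference mirrors the module's naming, and the map keys are arbitrary labels):

resource "aws_iam_role_policy_attachment" "this" {
  for_each = { for k, v in merge(
    {
      AmazonEKSWorkerNodePolicy          = "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy"
      AmazonEC2ContainerRegistryReadOnly = "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
    },
    # iam_role_attach_cni_policy is a plain bool, so the key set stays known
    # even though the CNI policy ARN itself is only known after apply
    var.iam_role_attach_cni_policy ? { AmazonEKS_CNI_Policy = local.cni_policy } : {}
  ) : k => v if var.create && var.create_iam_instance_profile }

  policy_arn = each.value
  role       = aws_iam_role.this[0].name
}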

@ryanpeach, I saw your comment on my closed PR; can you post your concerns here so I can work on this later?

@bryantbiggs
Member

Instead of trying proposed solutions - what if we provide a reproduction that demonstrates the issue?

@ryanpeach

ryanpeach commented Mar 10, 2023

I understand the desire to have a reproduction, but I, unfortunately, have too much on my plate.

You are seeing a lot of issues on this one line, and my change disables that line in a way that lets the user fix it themselves.

@bryantbiggs
Member

You are seeing a lot of issues on this one line, and my change disables that line in a way that lets the user fix it themselves.

And yet, nobody is able to provide a reproduction. I see at least 4 different individuals here; I don't know that that warrants "a lot of issues" on this one line. Also, we use this pattern in several places, both in this module and in others. Perhaps starting with a reproduction, as our issue templates state, will save a lot of time and headache.

@nathaclmpaulino
Author

I'm sorry for the late reply; I'll post an example that reproduces the error by the end of the day.

@nathaclmpaulino
Author

Here is an example, guys! This came from the self_managed_node_groups example.

Terraform version: 1.4.0

#data.tf
data "aws_partition" "current" {}
data "aws_iam_policy_document" "node_assume_policy" {
  statement {
    sid = "EKSWorkersAssumeRole"


    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }

  statement {
    sid = "SSMManagedInstance"

    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ssm.amazonaws.com"]
    }
  }
}

#eks.tf
################################################################################
# EKS Module
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.10.0"

  cluster_name                   = local.cluster_name
  cluster_version                = local.cluster_version
  cluster_endpoint_public_access = true
  create_node_security_group     = true

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true

  # Cluster IAM roles
  create_iam_role          = true
  iam_role_name            = local.cluster_name
  iam_role_use_name_prefix = true
  iam_role_path            = "/"

  # Self Managed Node Groups
  self_managed_node_group_defaults = {

    create_iam_instance_profile = false
    iam_instance_profile_arn    = aws_iam_instance_profile.nodes.arn
    iam_role_attach_cni_policy  = true

    suspended_processes = ["AZRebalance"]
    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_size = 75
          volume_type = "gp3"
        }
      }
    }

    network_interfaces = [
      {
        associate_public_ip_address = true
        delete_on_termination       = true
      }
    ]
    autoscaling_group_tags = {
      "k8s.io/cluster-autoscaler/enabled" : true,
      "k8s.io/cluster-autoscaler/${local.cluster_name}" : "owned",
    }

  }

  self_managed_node_groups = {
    spot = {
      name       = "spot-ig"
      subnet_ids = module.vpc.public_subnets

      min_size     = 0
      max_size     = 7
      desired_size = 0

      ami_id        = data.aws_ami.eks_default.id
      instance_type = "m6i.large"

      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
    }
    on-demand = {
      name       = "on-demand-ig"
      subnet_ids = module.vpc.public_subnets

      min_size     = 1
      max_size     = 3
      desired_size = 1

      ami_id        = data.aws_ami.eks_default.id
      instance_type = "m6i.large"

      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=on-demand'"
    }
  }


  depends_on = [
    aws_iam_instance_profile.nodes,
    aws_iam_role.nodes,
    aws_iam_role_policy_attachment.ssm,
    aws_iam_role_policy_attachment.ecr_read_only,
    aws_iam_role_policy_attachment.cni,
    aws_iam_role_policy_attachment.worker,
  ]
}

################################################################################
# Supporting Resource
################################################################################
data "aws_ami" "eks_default" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amazon-eks-node-${local.cluster_version}-v*"]
  }
}

#iam.tf
resource "aws_iam_role" "nodes" {
  name               = "${local.cluster_name}-nodes"
  assume_role_policy = data.aws_iam_policy_document.node_assume_policy.json
}

resource "aws_iam_instance_profile" "nodes" {
  name = "${local.cluster_name}-nodes-instance-profile"
  role = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "ssm" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_role_policy_attachment" "ecr_read_only" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_iam_role_policy_attachment" "cni" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "worker" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEKSWorkerNodePolicy"
}

#locals.tf

locals {
  cluster_name    = "ex-${replace(basename(path.cwd), "_", "-")}"
  cluster_version = "1.24"
  region          = "us-east-1"
  policy_prefix   = "aws:${data.aws_partition.current.partition}:iam::aws:policy"
}

# providers.tf
provider "aws" {
  region = local.region
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

#vpc.tf
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = "test-vpc"
  cidr = "172.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1f"]
  private_subnets = ["172.0.1.0/24", "172.0.2.0/24"]
  public_subnets  = ["172.0.3.0/24", "172.0.4.0/24"]

  create_elasticache_subnet_group = false
  create_database_subnet_group    = false
  create_redshift_subnet_group    = false
  enable_nat_gateway              = false
  enable_vpn_gateway              = false

  enable_ipv6          = true
  enable_dns_hostnames = true
  enable_dns_support   = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = {
    Terraform   = "true"
    Environment = "test"
  }
}

@bryantbiggs
Member

Just at first glance, that doesn't appear to be a valid set of configs. Did you test this configuration? Just looking at the VPC alone, I don't think that's going to allow creating a cluster with the given config.

@nathaclmpaulino
Author

This is only an example that I tweaked from the self_managed_node_group example (not the real configuration I have at the company). I'm not sure my example bootstraps a functional EKS cluster, because the VPC CIDR ranges are too short and because of the error on the terraform plan command that I mentioned when I created this issue.

To rule out any misconfiguration coming from the VPC module, I changed the entire VPC code to match the VPC definition in the examples/self_managed_node_group/main.tf file, and it still presents the same problem. I'll post the same example here (with just the VPC differences) along with the whole terraform plan output.

Terraform code:

#data.tf
data "aws_partition" "current" {}
data "aws_iam_policy_document" "node_assume_policy" {
  statement {
    sid = "EKSWorkersAssumeRole"


    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }

  statement {
    sid = "SSMManagedInstance"

    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ssm.amazonaws.com"]
    }
  }
}

#eks.tf
################################################################################
# EKS Module
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.10.0"

  cluster_name                   = local.cluster_name
  cluster_version                = local.cluster_version
  cluster_endpoint_public_access = true
  create_node_security_group     = true

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true

  # Cluster IAM roles
  create_iam_role           = true
  iam_role_name             = local.cluster_name
  iam_role_use_name_prefix  = true
  iam_role_path             = "/"
  enable_irsa               = false
  cluster_encryption_config = {}

  # Self Managed Node Groups
  self_managed_node_group_defaults = {

    create_iam_instance_profile = false #You can even set 
    iam_instance_profile_arn    = aws_iam_instance_profile.nodes.arn
    iam_role_attach_cni_policy  = true

    suspended_processes = ["AZRebalance"]
    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_size = 75
          volume_type = "gp3"
        }
      }
    }

    network_interfaces = [
      {
        associate_public_ip_address = true
        delete_on_termination       = true
      }
    ]
    autoscaling_group_tags = {
      "k8s.io/cluster-autoscaler/enabled" : true,
      "k8s.io/cluster-autoscaler/${local.cluster_name}" : "owned",
    }

  }

  self_managed_node_groups = {
    spot = {
      name       = "spot-ig"
      subnet_ids = module.vpc.public_subnets

      min_size     = 0
      max_size     = 7
      desired_size = 0

      ami_id        = data.aws_ami.eks_default.id
      instance_type = "m6i.large"

      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
    }
    on-demand = {
      name       = "on-demand-ig"
      subnet_ids = module.vpc.public_subnets

      min_size     = 1
      max_size     = 3
      desired_size = 1

      ami_id        = data.aws_ami.eks_default.id
      instance_type = "m6i.large"

      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=on-demand'"
    }
  }


  depends_on = [
    aws_iam_instance_profile.nodes,
    aws_iam_role.nodes,
    aws_iam_role_policy_attachment.ssm,
    aws_iam_role_policy_attachment.ecr_read_only,
    aws_iam_role_policy_attachment.cni,
    aws_iam_role_policy_attachment.worker,
  ]
}

################################################################################
# Supporting Resource
################################################################################
data "aws_ami" "eks_default" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amazon-eks-node-${local.cluster_version}-v*"]
  }
}

#iam.tf
resource "aws_iam_role" "nodes" {
  name               = "${local.cluster_name}-nodes"
  assume_role_policy = data.aws_iam_policy_document.node_assume_policy.json
}

resource "aws_iam_instance_profile" "nodes" {
  name = "${local.cluster_name}-nodes-instance-profile"
  role = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "ssm" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_role_policy_attachment" "ecr_read_only" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_iam_role_policy_attachment" "cni" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "worker" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEKSWorkerNodePolicy"
}

#locals.tf
locals {
  cluster_name    = "ex-${replace(basename(path.cwd), "_", "-")}"
  cluster_version = "1.24"
  region          = "us-east-1"
  policy_prefix   = "aws:${data.aws_partition.current.partition}:iam::aws:policy"

  azs      = ["us-east-1a", "us-east-1f"]
  vpc_cidr = "10.0.0.0/16"
}

#provider.tf
provider "aws" {
  region = local.region
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}


#vpc.tf
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = local.cluster_name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
  intra_subnets   = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]

  enable_dns_hostnames = true
  enable_dns_support   = true
  enable_ipv6          = true

  enable_flow_log                      = true
  create_flow_log_cloudwatch_iam_role  = true
  create_flow_log_cloudwatch_log_group = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = {
    Terraform   = "true"
    Environment = "test"
  }
}

Terraform plan output:

module.vpc.data.aws_iam_policy_document.flow_log_cloudwatch_assume_role[0]: Reading...
data.aws_partition.current: Reading...
data.aws_iam_policy_document.node_assume_policy: Reading...
module.vpc.data.aws_iam_policy_document.vpc_flow_log_cloudwatch[0]: Reading...
data.aws_ami.eks_default: Reading...
module.vpc.data.aws_iam_policy_document.flow_log_cloudwatch_assume_role[0]: Read complete after 0s [id=1021377347]
data.aws_iam_policy_document.node_assume_policy: Read complete after 0s [id=1310384110]
data.aws_partition.current: Read complete after 0s [id=aws]
module.vpc.data.aws_iam_policy_document.vpc_flow_log_cloudwatch[0]: Read complete after 0s [id=2053943846]
data.aws_ami.eks_default: Read complete after 0s [id=ami-0b4795e99297c2650]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform planned the following actions, but then encountered a problem:

  # aws_iam_instance_profile.nodes will be created
  + resource "aws_iam_instance_profile" "nodes" {
      + arn         = (known after apply)
      + create_date = (known after apply)
      + id          = (known after apply)
      + name        = "ex-terraform-nodes-instance-profile"
      + path        = "/"
      + role        = "ex-terraform-nodes"
      + tags_all    = (known after apply)
      + unique_id   = (known after apply)
    }

  # aws_iam_role.nodes will be created
  + resource "aws_iam_role" "nodes" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                      + Sid       = "EKSWorkersAssumeRole"
                    },
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ssm.amazonaws.com"
                        }
                      + Sid       = "SSMManagedInstance"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "ex-terraform-nodes"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)
    }

  # aws_iam_role_policy_attachment.cni will be created
  + resource "aws_iam_role_policy_attachment" "cni" {
      + id         = (known after apply)
      + policy_arn = "aws:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      + role       = "ex-terraform-nodes"
    }

  # aws_iam_role_policy_attachment.ecr_read_only will be created
  + resource "aws_iam_role_policy_attachment" "ecr_read_only" {
      + id         = (known after apply)
      + policy_arn = "aws:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      + role       = "ex-terraform-nodes"
    }

  # aws_iam_role_policy_attachment.ssm will be created
  + resource "aws_iam_role_policy_attachment" "ssm" {
      + id         = (known after apply)
      + policy_arn = "aws:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
      + role       = "ex-terraform-nodes"
    }

  # aws_iam_role_policy_attachment.worker will be created
  + resource "aws_iam_role_policy_attachment" "worker" {
      + id         = (known after apply)
      + policy_arn = "aws:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      + role       = "ex-terraform-nodes"
    }

  # module.eks.data.aws_caller_identity.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_caller_identity" "current" {
      + account_id = (known after apply)
      + arn        = (known after apply)
      + id         = (known after apply)
      + user_id    = (known after apply)
    }

  # module.eks.data.aws_eks_addon_version.this["coredns"] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_eks_addon_version" "this" {
      + addon_name         = "coredns"
      + id                 = (known after apply)
      + kubernetes_version = "1.24"
      + most_recent        = true
      + version            = (known after apply)
    }

  # module.eks.data.aws_eks_addon_version.this["kube-proxy"] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_eks_addon_version" "this" {
      + addon_name         = "kube-proxy"
      + id                 = (known after apply)
      + kubernetes_version = "1.24"
      + most_recent        = true
      + version            = (known after apply)
    }

  # module.eks.data.aws_eks_addon_version.this["vpc-cni"] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_eks_addon_version" "this" {
      + addon_name         = "vpc-cni"
      + id                 = (known after apply)
      + kubernetes_version = "1.24"
      + most_recent        = true
      + version            = (known after apply)
    }

  # module.eks.data.aws_iam_policy_document.assume_role_policy[0] will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "assume_role_policy" {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions = [
              + "sts:AssumeRole",
            ]
          + sid     = "EKSClusterAssumeRole"

          + principals {
              + identifiers = [
                  + (known after apply),
                ]
              + type        = "Service"
            }
        }
    }

  # module.eks.data.aws_iam_session_context.current will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_session_context" "current" {
      + arn          = (known after apply)
      + id           = (known after apply)
      + issuer_arn   = (known after apply)
      + issuer_id    = (known after apply)
      + issuer_name  = (known after apply)
      + session_name = (known after apply)
    }

  # module.eks.data.aws_partition.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_partition" "current" {
      + dns_suffix         = (known after apply)
      + id                 = (known after apply)
      + partition          = (known after apply)
      + reverse_dns_prefix = (known after apply)
    }

  # module.eks.aws_cloudwatch_log_group.this[0] will be created
  + resource "aws_cloudwatch_log_group" "this" {
      + arn               = (known after apply)
      + id                = (known after apply)
      + name              = "/aws/eks/ex-terraform/cluster"
      + name_prefix       = (known after apply)
      + retention_in_days = 90
      + skip_destroy      = false
      + tags_all          = (known after apply)
    }

  # module.eks.aws_eks_cluster.this[0] will be created
  + resource "aws_eks_cluster" "this" {
      + arn                       = (known after apply)
      + certificate_authority     = (known after apply)
      + cluster_id                = (known after apply)
      + created_at                = (known after apply)
      + enabled_cluster_log_types = [
          + "api",
          + "audit",
          + "authenticator",
        ]
      + endpoint                  = (known after apply)
      + id                        = (known after apply)
      + identity                  = (known after apply)
      + name                      = "ex-terraform"
      + platform_version          = (known after apply)
      + role_arn                  = (known after apply)
      + status                    = (known after apply)
      + tags_all                  = (known after apply)
      + version                   = "1.24"

      + kubernetes_network_config {
          + ip_family         = (known after apply)
          + service_ipv4_cidr = (known after apply)
          + service_ipv6_cidr = (known after apply)
        }

      + timeouts {}

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = true
          + endpoint_public_access    = true
          + public_access_cidrs       = [
              + "0.0.0.0/0",
            ]
          + security_group_ids        = (known after apply)
          + subnet_ids                = (known after apply)
          + vpc_id                    = (known after apply)
        }
    }

  # module.eks.aws_iam_role.this[0] will be created
  + resource "aws_iam_role" "this" {
      + arn                   = (known after apply)
      + assume_role_policy    = (known after apply)
      + create_date           = (known after apply)
      + force_detach_policies = true
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = "ex-terraform-"
      + path                  = "/"
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)

      + inline_policy {
          + name   = "ex-terraform"
          + policy = jsonencode(
                {
                  + Statement = [
                      + {
                          + Action   = [
                              + "logs:CreateLogGroup",
                            ]
                          + Effect   = "Deny"
                          + Resource = "*"
                        },
                    ]
                  + Version   = "2012-10-17"
                }
            )
        }
    }

  # module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = (known after apply)
    }

  # module.eks.aws_iam_role_policy_attachment.this["AmazonEKSVPCResourceController"] will be created
  + resource "aws_iam_role_policy_attachment" "this" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = (known after apply)
    }

  # module.eks.aws_security_group.cluster[0] will be created
  + resource "aws_security_group" "cluster" {
      + arn                    = (known after apply)
      + description            = "EKS cluster security group"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = "ex-terraform-cluster-"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Name" = "ex-terraform-cluster"
        }
      + tags_all               = {
          + "Name" = "ex-terraform-cluster"
        }
      + vpc_id                 = (known after apply)
    }

  # module.eks.aws_security_group.node[0] will be created
  + resource "aws_security_group" "node" {
      + arn                    = (known after apply)
      + description            = "EKS node shared security group"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = (known after apply)
      + name_prefix            = "ex-terraform-node-"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Name"                               = "ex-terraform-node"
          + "kubernetes.io/cluster/ex-terraform" = "owned"
        }
      + tags_all               = {
          + "Name"                               = "ex-terraform-node"
          + "kubernetes.io/cluster/ex-terraform" = "owned"
        }
      + vpc_id                 = (known after apply)
    }

  # module.eks.aws_security_group_rule.cluster["ingress_nodes_443"] will be created
  + resource "aws_security_group_rule" "cluster" {
      + description              = "Node groups to cluster API"
      + from_port                = 443
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["egress_all"] will be created
  + resource "aws_security_group_rule" "node" {
      + cidr_blocks              = [
          + "0.0.0.0/0",
        ]
      + description              = "Allow all egress"
      + from_port                = 0
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "-1"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 0
      + type                     = "egress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_443"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node groups"
      + from_port                = 443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_4443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 4443/tcp webhook"
      + from_port                = 4443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 4443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_6443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 6443/tcp webhook"
      + from_port                = 6443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 6443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_8443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 8443/tcp webhook"
      + from_port                = 8443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 8443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_9443_webhook"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node 9443/tcp webhook"
      + from_port                = 9443
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 9443
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_cluster_kubelet"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Cluster API to node kubelets"
      + from_port                = 10250
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 10250
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_nodes_ephemeral"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node ingress on ephemeral ports"
      + from_port                = 1025
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 65535
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_self_coredns_tcp"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node CoreDNS"
      + from_port                = 53
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 53
      + type                     = "ingress"
    }

  # module.eks.aws_security_group_rule.node["ingress_self_coredns_udp"] will be created
  + resource "aws_security_group_rule" "node" {
      + description              = "Node to node CoreDNS UDP"
      + from_port                = 53
      + id                       = (known after apply)
      + prefix_list_ids          = []
      + protocol                 = "udp"
      + security_group_id        = (known after apply)
      + security_group_rule_id   = (known after apply)
      + self                     = true
      + source_security_group_id = (known after apply)
      + to_port                  = 53
      + type                     = "ingress"
    }

  # module.eks.time_sleep.this[0] will be created
  + resource "time_sleep" "this" {
      + create_duration = "30s"
      + id              = (known after apply)
      + triggers        = {
          + "cluster_name"    = "ex-terraform"
          + "cluster_version" = "1.24"
        }
    }

  # module.vpc.aws_cloudwatch_log_group.flow_log[0] will be created
  + resource "aws_cloudwatch_log_group" "flow_log" {
      + arn               = (known after apply)
      + id                = (known after apply)
      + name              = (known after apply)
      + name_prefix       = (known after apply)
      + retention_in_days = 0
      + skip_destroy      = false
      + tags              = {
          + "Environment" = "test"
          + "Terraform"   = "true"
        }
      + tags_all          = {
          + "Environment" = "test"
          + "Terraform"   = "true"
        }
    }

  # module.vpc.aws_egress_only_internet_gateway.this[0] will be created
  + resource "aws_egress_only_internet_gateway" "this" {
      + id       = (known after apply)
      + tags     = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform"
          + "Terraform"   = "true"
        }
      + tags_all = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform"
          + "Terraform"   = "true"
        }
      + vpc_id   = (known after apply)
    }

  # module.vpc.aws_flow_log.this[0] will be created
  + resource "aws_flow_log" "this" {
      + arn                      = (known after apply)
      + iam_role_arn             = (known after apply)
      + id                       = (known after apply)
      + log_destination          = (known after apply)
      + log_destination_type     = "cloud-watch-logs"
      + log_format               = (known after apply)
      + log_group_name           = (known after apply)
      + max_aggregation_interval = 600
      + tags                     = {
          + "Environment" = "test"
          + "Terraform"   = "true"
        }
      + tags_all                 = {
          + "Environment" = "test"
          + "Terraform"   = "true"
        }
      + traffic_type             = "ALL"
      + vpc_id                   = (known after apply)
    }

  # module.vpc.aws_iam_policy.vpc_flow_log_cloudwatch[0] will be created
  + resource "aws_iam_policy" "vpc_flow_log_cloudwatch" {
      + arn         = (known after apply)
      + id          = (known after apply)
      + name        = (known after apply)
      + name_prefix = "vpc-flow-log-to-cloudwatch-"
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "logs:PutLogEvents",
                          + "logs:DescribeLogStreams",
                          + "logs:DescribeLogGroups",
                          + "logs:CreateLogStream",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                      + Sid      = "AWSVPCFlowLogsPushToCloudWatch"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + policy_id   = (known after apply)
      + tags        = {
          + "Environment" = "test"
          + "Terraform"   = "true"
        }
      + tags_all    = {
          + "Environment" = "test"
          + "Terraform"   = "true"
        }
    }

  # module.vpc.aws_iam_role.vpc_flow_log_cloudwatch[0] will be created
  + resource "aws_iam_role" "vpc_flow_log_cloudwatch" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "vpc-flow-logs.amazonaws.com"
                        }
                      + Sid       = "AWSVPCFlowLogsAssumeRole"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = "vpc-flow-log-role-"
      + path                  = "/"
      + tags                  = {
          + "Environment" = "test"
          + "Terraform"   = "true"
        }
      + tags_all              = {
          + "Environment" = "test"
          + "Terraform"   = "true"
        }
      + unique_id             = (known after apply)
    }

  # module.vpc.aws_iam_role_policy_attachment.vpc_flow_log_cloudwatch[0] will be created
  + resource "aws_iam_role_policy_attachment" "vpc_flow_log_cloudwatch" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = (known after apply)
    }

  # module.vpc.aws_internet_gateway.this[0] will be created
  + resource "aws_internet_gateway" "this" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + tags     = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform"
          + "Terraform"   = "true"
        }
      + tags_all = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform"
          + "Terraform"   = "true"
        }
      + vpc_id   = (known after apply)
    }

  # module.vpc.aws_route.private_ipv6_egress[0] will be created
  + resource "aws_route" "private_ipv6_egress" {
      + destination_ipv6_cidr_block = "::/0"
      + egress_only_gateway_id      = (known after apply)
      + id                          = (known after apply)
      + instance_id                 = (known after apply)
      + instance_owner_id           = (known after apply)
      + network_interface_id        = (known after apply)
      + origin                      = (known after apply)
      + route_table_id              = (known after apply)
      + state                       = (known after apply)
    }

  # module.vpc.aws_route.private_ipv6_egress[1] will be created
  + resource "aws_route" "private_ipv6_egress" {
      + destination_ipv6_cidr_block = "::/0"
      + egress_only_gateway_id      = (known after apply)
      + id                          = (known after apply)
      + instance_id                 = (known after apply)
      + instance_owner_id           = (known after apply)
      + network_interface_id        = (known after apply)
      + origin                      = (known after apply)
      + route_table_id              = (known after apply)
      + state                       = (known after apply)
    }

  # module.vpc.aws_route.public_internet_gateway[0] will be created
  + resource "aws_route" "public_internet_gateway" {
      + destination_cidr_block = "0.0.0.0/0"
      + gateway_id             = (known after apply)
      + id                     = (known after apply)
      + instance_id            = (known after apply)
      + instance_owner_id      = (known after apply)
      + network_interface_id   = (known after apply)
      + origin                 = (known after apply)
      + route_table_id         = (known after apply)
      + state                  = (known after apply)

      + timeouts {
          + create = "5m"
        }
    }

  # module.vpc.aws_route.public_internet_gateway_ipv6[0] will be created
  + resource "aws_route" "public_internet_gateway_ipv6" {
      + destination_ipv6_cidr_block = "::/0"
      + gateway_id                  = (known after apply)
      + id                          = (known after apply)
      + instance_id                 = (known after apply)
      + instance_owner_id           = (known after apply)
      + network_interface_id        = (known after apply)
      + origin                      = (known after apply)
      + route_table_id              = (known after apply)
      + state                       = (known after apply)
    }

  # module.vpc.aws_route_table.intra[0] will be created
  + resource "aws_route_table" "intra" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = (known after apply)
      + tags             = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-intra"
          + "Terraform"   = "true"
        }
      + tags_all         = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-intra"
          + "Terraform"   = "true"
        }
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table.private[0] will be created
  + resource "aws_route_table" "private" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = (known after apply)
      + tags             = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-private-us-east-1a"
          + "Terraform"   = "true"
        }
      + tags_all         = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-private-us-east-1a"
          + "Terraform"   = "true"
        }
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table.private[1] will be created
  + resource "aws_route_table" "private" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = (known after apply)
      + tags             = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-private-us-east-1f"
          + "Terraform"   = "true"
        }
      + tags_all         = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-private-us-east-1f"
          + "Terraform"   = "true"
        }
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table.public[0] will be created
  + resource "aws_route_table" "public" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = (known after apply)
      + tags             = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-public"
          + "Terraform"   = "true"
        }
      + tags_all         = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-public"
          + "Terraform"   = "true"
        }
      + vpc_id           = (known after apply)
    }

  # module.vpc.aws_route_table_association.intra[0] will be created
  + resource "aws_route_table_association" "intra" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.intra[1] will be created
  + resource "aws_route_table_association" "intra" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.private[0] will be created
  + resource "aws_route_table_association" "private" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.private[1] will be created
  + resource "aws_route_table_association" "private" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[0] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_route_table_association.public[1] will be created
  + resource "aws_route_table_association" "public" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # module.vpc.aws_subnet.intra[0] will be created
  + resource "aws_subnet" "intra" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.52.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-intra-us-east-1a"
          + "Terraform"   = "true"
        }
      + tags_all                                       = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-intra-us-east-1a"
          + "Terraform"   = "true"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.intra[1] will be created
  + resource "aws_subnet" "intra" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1f"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.53.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-intra-us-east-1f"
          + "Terraform"   = "true"
        }
      + tags_all                                       = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform-intra-us-east-1f"
          + "Terraform"   = "true"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.private[0] will be created
  + resource "aws_subnet" "private" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.0.0/20"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Environment"                     = "test"
          + "Name"                            = "ex-terraform-private-us-east-1a"
          + "Terraform"                       = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Environment"                     = "test"
          + "Name"                            = "ex-terraform-private-us-east-1a"
          + "Terraform"                       = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.private[1] will be created
  + resource "aws_subnet" "private" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1f"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.16.0/20"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Environment"                     = "test"
          + "Name"                            = "ex-terraform-private-us-east-1f"
          + "Terraform"                       = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Environment"                     = "test"
          + "Name"                            = "ex-terraform-private-us-east-1f"
          + "Terraform"                       = "true"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.public[0] will be created
  + resource "aws_subnet" "public" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.48.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Environment"            = "test"
          + "Name"                   = "ex-terraform-public-us-east-1a"
          + "Terraform"              = "true"
          + "kubernetes.io/role/elb" = "1"
        }
      + tags_all                                       = {
          + "Environment"            = "test"
          + "Name"                   = "ex-terraform-public-us-east-1a"
          + "Terraform"              = "true"
          + "kubernetes.io/role/elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_subnet.public[1] will be created
  + resource "aws_subnet" "public" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1f"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.49.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Environment"            = "test"
          + "Name"                   = "ex-terraform-public-us-east-1f"
          + "Terraform"              = "true"
          + "kubernetes.io/role/elb" = "1"
        }
      + tags_all                                       = {
          + "Environment"            = "test"
          + "Name"                   = "ex-terraform-public-us-east-1f"
          + "Terraform"              = "true"
          + "kubernetes.io/role/elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # module.vpc.aws_vpc.this[0] will be created
  + resource "aws_vpc" "this" {
      + arn                                  = (known after apply)
      + assign_generated_ipv6_cidr_block     = true
      + cidr_block                           = "10.0.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_classiclink                   = (known after apply)
      + enable_classiclink_dns_support       = (known after apply)
      + enable_dns_hostnames                 = true
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform"
          + "Terraform"   = "true"
        }
      + tags_all                             = {
          + "Environment" = "test"
          + "Name"        = "ex-terraform"
          + "Terraform"   = "true"
        }
    }

  # module.eks.module.kms.data.aws_caller_identity.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_caller_identity" "current" {
      + account_id = (known after apply)
      + arn        = (known after apply)
      + id         = (known after apply)
      + user_id    = (known after apply)
    }

  # module.eks.module.kms.data.aws_partition.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_partition" "current" {
      + dns_suffix         = (known after apply)
      + id                 = (known after apply)
      + partition          = (known after apply)
      + reverse_dns_prefix = (known after apply)
    }

  # module.eks.module.self_managed_node_group["on-demand"].data.aws_ami.eks_default[0] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_ami" "eks_default" {
      + architecture          = (known after apply)
      + arn                   = (known after apply)
      + block_device_mappings = (known after apply)
      + boot_mode             = (known after apply)
      + creation_date         = (known after apply)
      + deprecation_time      = (known after apply)
      + description           = (known after apply)
      + ena_support           = (known after apply)
      + hypervisor            = (known after apply)
      + id                    = (known after apply)
      + image_id              = (known after apply)
      + image_location        = (known after apply)
      + image_owner_alias     = (known after apply)
      + image_type            = (known after apply)
      + imds_support          = (known after apply)
      + kernel_id             = (known after apply)
      + most_recent           = true
      + name                  = (known after apply)
      + owner_id              = (known after apply)
      + owners                = [
          + "amazon",
        ]
      + platform              = (known after apply)
      + platform_details      = (known after apply)
      + product_codes         = (known after apply)
      + public                = (known after apply)
      + ramdisk_id            = (known after apply)
      + root_device_name      = (known after apply)
      + root_device_type      = (known after apply)
      + root_snapshot_id      = (known after apply)
      + sriov_net_support     = (known after apply)
      + state                 = (known after apply)
      + state_reason          = (known after apply)
      + tags                  = (known after apply)
      + tpm_support           = (known after apply)
      + usage_operation       = (known after apply)
      + virtualization_type   = (known after apply)

      + filter {
          + name   = "name"
          + values = [
              + "amazon-eks-node-1.24-v*",
            ]
        }
    }

  # module.eks.module.self_managed_node_group["on-demand"].data.aws_caller_identity.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_caller_identity" "current" {
      + account_id = (known after apply)
      + arn        = (known after apply)
      + id         = (known after apply)
      + user_id    = (known after apply)
    }

  # module.eks.module.self_managed_node_group["on-demand"].data.aws_iam_policy_document.assume_role_policy[0] will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "assume_role_policy" {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions = [
              + "sts:AssumeRole",
            ]
          + sid     = "EKSNodeAssumeRole"

          + principals {
              + identifiers = [
                  + (known after apply),
                ]
              + type        = "Service"
            }
        }
    }

  # module.eks.module.self_managed_node_group["on-demand"].data.aws_partition.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_partition" "current" {
      + dns_suffix         = (known after apply)
      + id                 = (known after apply)
      + partition          = (known after apply)
      + reverse_dns_prefix = (known after apply)
    }

  # module.eks.module.self_managed_node_group["on-demand"].aws_iam_instance_profile.this[0] will be created
  + resource "aws_iam_instance_profile" "this" {
      + arn         = (known after apply)
      + create_date = (known after apply)
      + id          = (known after apply)
      + name        = (known after apply)
      + name_prefix = "on-demand-ig-node-group-"
      + path        = "/"
      + role        = (known after apply)
      + tags_all    = (known after apply)
      + unique_id   = (known after apply)
    }

  # module.eks.module.self_managed_node_group["on-demand"].aws_iam_role.this[0] will be created
  + resource "aws_iam_role" "this" {
      + arn                   = (known after apply)
      + assume_role_policy    = (known after apply)
      + create_date           = (known after apply)
      + description           = "Self managed node group IAM role"
      + force_detach_policies = true
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = "on-demand-ig-node-group-"
      + path                  = "/"
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)
    }

  # module.eks.module.self_managed_node_group["spot"].data.aws_ami.eks_default[0] will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_ami" "eks_default" {
      + architecture          = (known after apply)
      + arn                   = (known after apply)
      + block_device_mappings = (known after apply)
      + boot_mode             = (known after apply)
      + creation_date         = (known after apply)
      + deprecation_time      = (known after apply)
      + description           = (known after apply)
      + ena_support           = (known after apply)
      + hypervisor            = (known after apply)
      + id                    = (known after apply)
      + image_id              = (known after apply)
      + image_location        = (known after apply)
      + image_owner_alias     = (known after apply)
      + image_type            = (known after apply)
      + imds_support          = (known after apply)
      + kernel_id             = (known after apply)
      + most_recent           = true
      + name                  = (known after apply)
      + owner_id              = (known after apply)
      + owners                = [
          + "amazon",
        ]
      + platform              = (known after apply)
      + platform_details      = (known after apply)
      + product_codes         = (known after apply)
      + public                = (known after apply)
      + ramdisk_id            = (known after apply)
      + root_device_name      = (known after apply)
      + root_device_type      = (known after apply)
      + root_snapshot_id      = (known after apply)
      + sriov_net_support     = (known after apply)
      + state                 = (known after apply)
      + state_reason          = (known after apply)
      + tags                  = (known after apply)
      + tpm_support           = (known after apply)
      + usage_operation       = (known after apply)
      + virtualization_type   = (known after apply)

      + filter {
          + name   = "name"
          + values = [
              + "amazon-eks-node-1.24-v*",
            ]
        }
    }

  # module.eks.module.self_managed_node_group["spot"].data.aws_caller_identity.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_caller_identity" "current" {
      + account_id = (known after apply)
      + arn        = (known after apply)
      + id         = (known after apply)
      + user_id    = (known after apply)
    }

  # module.eks.module.self_managed_node_group["spot"].data.aws_iam_policy_document.assume_role_policy[0] will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "assume_role_policy" {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions = [
              + "sts:AssumeRole",
            ]
          + sid     = "EKSNodeAssumeRole"

          + principals {
              + identifiers = [
                  + (known after apply),
                ]
              + type        = "Service"
            }
        }
    }

  # module.eks.module.self_managed_node_group["spot"].data.aws_partition.current will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "aws_partition" "current" {
      + dns_suffix         = (known after apply)
      + id                 = (known after apply)
      + partition          = (known after apply)
      + reverse_dns_prefix = (known after apply)
    }

  # module.eks.module.self_managed_node_group["spot"].aws_iam_instance_profile.this[0] will be created
  + resource "aws_iam_instance_profile" "this" {
      + arn         = (known after apply)
      + create_date = (known after apply)
      + id          = (known after apply)
      + name        = (known after apply)
      + name_prefix = "spot-ig-node-group-"
      + path        = "/"
      + role        = (known after apply)
      + tags_all    = (known after apply)
      + unique_id   = (known after apply)
    }

  # module.eks.module.self_managed_node_group["spot"].aws_iam_role.this[0] will be created
  + resource "aws_iam_role" "this" {
      + arn                   = (known after apply)
      + assume_role_policy    = (known after apply)
      + create_date           = (known after apply)
      + description           = "Self managed node group IAM role"
      + force_detach_policies = true
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = (known after apply)
      + name_prefix           = "spot-ig-node-group-"
      + path                  = "/"
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)
    }

Plan: 57 to add, 0 to change, 0 to destroy.

Error: Invalid for_each argument

  on .terraform/modules/eks/modules/self-managed-node-group/main.tf line 740, in resource "aws_iam_role_policy_attachment" "this":
 740:   for_each = { for k, v in toset(compact([
 741:     "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
 742:     "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
 743:     var.iam_role_attach_cni_policy ? local.cni_policy : "",
 744:   ])) : k => v if var.create && var.create_iam_instance_profile }
    ├────────────────
    │ local.cni_policy is a string, known only after apply
    │ local.iam_role_policy_prefix is a string, known only after apply
    │ var.create is true
    │ var.create_iam_instance_profile is true
    │ var.iam_role_attach_cni_policy is true

The "for_each" map includes keys derived from resource attributes that cannot
be determined until apply, and so Terraform cannot determine the full set of
keys that will identify the instances of this resource.

When working with unknown values in for_each, it's better to define the map
keys statically in your configuration and place apply-time results only in
the map values.

Alternatively, you could use the -target planning option to first apply only
the resources that the for_each value depends on, and then apply a second
time to fully converge.

Error: Invalid for_each argument

  on .terraform/modules/eks/modules/self-managed-node-group/main.tf line 740, in resource "aws_iam_role_policy_attachment" "this":
740:   for_each = { for k, v in toset(compact([
741:     "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
742:     "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
743:     var.iam_role_attach_cni_policy ? local.cni_policy : "",
744:   ])) : k => v if var.create && var.create_iam_instance_profile }
    ├────────────────
    │ local.cni_policy is a string, known only after apply
    │ local.iam_role_policy_prefix is a string, known only after apply
    │ var.create is true
    │ var.create_iam_instance_profile is true
    │ var.iam_role_attach_cni_policy is true

The "for_each" map includes keys derived from resource attributes that cannot
be determined until apply, and so Terraform cannot determine the full set of
keys that will identify the instances of this resource.

When working with unknown values in for_each, it's better to define the map
keys statically in your configuration and place apply-time results only in
the map values.

Alternatively, you could use the -target planning option to first apply only
the resources that the for_each value depends on, and then apply a second
time to fully converge.
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.

@github-actions
Copy link

This issue has been automatically marked as stale because it has been open 30 days
with no activity. Remove stale label or comment or this issue will be closed in 10 days

@github-actions github-actions bot added the stale label Apr 12, 2023
@nathaclmpaulino
Copy link
Author

Sorry for the late reply, but I think this is not a stale issue at all.

Yesterday, while debugging this issue, I ran the exact same code without the depends_on meta-argument, and it worked! (By "worked" I mean it passed the planning stage.)

The depends_on meta-argument should be an idempotent hint: it only tells Terraform which resources must be created before others, so its presence alone should not decide whether a terraform plan succeeds or fails.

@bryantbiggs
Copy link
Member

You should not use depends_on on modules because it is known to cause these issues hashicorp/terraform#30340
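The ordering you want can usually be expressed through the arguments you already pass to the module instead. A minimal sketch of the idea (resource names here are illustrative, not taken from your configuration):

# Instead of a module-level depends_on, let Terraform infer the ordering from
# the references themselves: the node groups cannot be created before the
# instance profile exists, because its ARN is consumed as an input.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  # ... cluster configuration ...

  self_managed_node_group_defaults = {
    create_iam_instance_profile = false
    # Implicit dependency on aws_iam_instance_profile.nodes (and, transitively,
    # on aws_iam_role.nodes) - no depends_on needed.
    iam_instance_profile_arn = aws_iam_instance_profile.nodes.arn
  }
}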

@github-actions github-actions bot removed the stale label Apr 18, 2023
@atamgp
Copy link

atamgp commented Apr 20, 2023

I have this issue with create_iam_role = false and the managed node group module.
It seems like keys that are only derived at apply time are used in the for_each.

Context
I am trying to create an eks-managed-node-group separately (outside the main module) without an IAM role.
Why? Because custom networking in the AWS VPC CNI needs to be set before the nodes start, and we don't have control over the nodes within the module.

module "eks_managed_node_group" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git//modules/eks-managed-node-group?ref=v19.5.1"
 name                = "startup"
  cluster_name        = var.cluster_name
  cluster_version     = var.cluster_version
  cluster_endpoint    = module.eks.cluster_endpoint
  cluster_auth_base64 = module.eks.cluster_certificate_authority_data
  .....
  
  create_iam_role = false
  iam_role_attach_cni_policy = false
  iam_role_arn    = module.eks_managed_node_group_role.iam_role_arn
  
  ...
  
  }

Error: Invalid for_each argument

│ on .terraform/modules/eks.eks_managed_node_group/modules/eks-managed-node-group/main.tf line 434, in resource "aws_iam_role_policy_attachment" "this":
│ 434: for_each = { for k, v in toset(compact([
│ 435: "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
│ 436: "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
│ 437: var.iam_role_attach_cni_policy ? local.cni_policy : "",
│ 438: ])) : k => v if var.create && var.create_iam_role }
│ ├────────────────
│ │ local.cni_policy is a string, known only after apply
│ │ local.iam_role_policy_prefix is a string, known only after apply
│ │ var.create is true
│ │ var.create_iam_role is false
│ │ var.iam_role_attach_cni_policy is false

│ The "for_each" map includes keys derived from resource attributes that
│ cannot be determined until apply, and so Terraform cannot determine the
│ full set of keys that will identify the instances of this resource.

│ When working with unknown values in for_each, it's better to define the map
│ keys statically in your configuration and place apply-time results only in
│ the map values.

@atamgp
Copy link

atamgp commented Apr 20, 2023

It was switched from

for_each = var.create && var.create_iam_role ? toset(compact(distinct(concat([

to

for_each = { for k, v in toset(compact([

in this commit, which is part of v19.

Switching this module (eks_managed_node_group only) back to the latest v18 is a workaround for me.
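A simplified sketch of the two shapes (not the module's exact code) that shows why the older form can still plan when the role is disabled:

# v18 shape: the known-false condition short-circuits the whole expression,
# so Terraform sees an empty, fully known set even though the policy ARNs
# (built from a deferred data source) are unknown at plan time.
for_each = var.create && var.create_iam_role ? toset(compact([
  "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
  "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
])) : toset([])

# v19 shape: the source set must be expanded before the `if` filter applies,
# and its members are unknown strings, so the keys of the resulting map cannot
# be determined at plan time and the plan fails with "Invalid for_each argument",
# even when the filter condition is false.
for_each = { for k, v in toset(compact([
  "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
  "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
])) : k => v if var.create && var.create_iam_role }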

@bryantbiggs
Copy link
Member

Because custom networking in aws cni needs to be set before nodes start an we don't have control on nodes within the module.

See this to configure VPC CNI before nodes are launched

@atamgp
Copy link

atamgp commented Apr 21, 2023

@bryantbiggs that will not work for AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG, since Kubernetes resources (kind: ENIConfig) also need to be created first.

@bryantbiggs
Copy link
Member

correct - but you don't need compute for that. Here is an example that does this https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/examples/vpc-cni-custom-networking
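The relevant part of that example is roughly the following (a sketch only, assuming a module version whose cluster_addons supports before_compute; the ENIConfig label shown is just the conventional zone label):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.12"

  # ... cluster configuration ...

  cluster_addons = {
    vpc-cni = {
      most_recent = true
      # Apply the addon (and its custom-networking settings) before any
      # compute is created, so the first nodes already pick them up.
      before_compute = true
      configuration_values = jsonencode({
        env = {
          AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG = "true"
          ENI_CONFIG_LABEL_DEF               = "topology.kubernetes.io/zone"
        }
      })
    }
  }
}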

@atamgp
Copy link

atamgp commented Apr 24, 2023

@bryantbiggs

Interesting, but it still seems not to work. It gets deployed, but has the linked solution been tested to confirm that pods are actually given an IP from the right subnet at startup? From what I can see, the initial (EKS managed) nodes still don't use that config at first; it only works after restarting/replacing the nodes.
10.116.x.x is my nodes-only subnet (not for pods) and should only be used by pods with hostNetwork:

➜ test git:(cni_separate_nodes_2) ✗ k get pods -A -o yaml | grep 'podIP: 10.116.' | wc -l
36
➜ test git:(cni_separate_nodes_2) ✗ k get pods -A -o yaml | grep 'hostNetwork: true' | wc -l
25

Separately from my issue:

"It seems like keys that are only derived at apply time are used in the for_each."

Does this seem like a problem to be fixed in the module?

@bryantbiggs
Copy link
Member

"It seems like keys are used on the for-each which are only derived at apply."
Does this seem like a problem to be fixed in the module?

No, it's a user configuration problem due to the use of depends_on across modules, which is not supported by Terraform.

@atamgp
Copy link

atamgp commented Apr 25, 2023

I don't use depends_on on the whole module.

Only this:

module "eks_managed_node_group"
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git//modules/eks-managed-node-group?ref=v19.5.1"

  cluster_endpoint       = module.eks.cluster_endpoint
  cluster_auth_base64    = module.eks.cluster_certificate_authority_data
  ...
  depends_on = [
     kubectl_manifest.eniconfigs
   ]
resource "kubectl_manifest" "eniconfigs" {
  for_each = { for k, v in local.eni_configs : k => v }
  yaml_body = templatefile("${path.module}/kubernetes-manifests/eniconfig.yaml",
    { eniconfig = each.value, node_security_group_id = module.eks.node_security_group_id, cluster_primary_security_group_id = module.eks.cluster_primary_security_group_id }
  )
}

I just want to make sure the eniconfigs are applied before the node group is created.
If I change the above eks_managed_node_group to v18.31.2, it works.

So how is this an issue with user config?
Both the kubectl_manifest.eniconfigs and eks_managed_node_group reference module.eks.* attributes.

@bryantbiggs
Copy link
Member

So how is this an issue with user config?

Because Terraform states that this is not supported and should not be done, due to the negative side effects you are reporting: hashicorp/terraform#30340 (comment)

@so-test
Copy link

so-test commented Apr 27, 2023

I got the same problem.
I think it may be fixed by using static map keys, like in this part of the code:
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/main.tf#L324-L328

Actually, the code above works fine in the plan output.
#2458 (comment)
(screenshot attached)
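Something along these lines (a sketch of the pattern, not the module's exact code): the map keys are static policy names, and the ARNs, which may be unknown until apply, appear only in the values:

resource "aws_iam_role_policy_attachment" "this" {
  # Keys are literal strings, so Terraform can enumerate the instances at plan
  # time even though the ARNs are built from a deferred data source.
  for_each = { for k, v in {
    AmazonEKSWorkerNodePolicy          = "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy"
    AmazonEC2ContainerRegistryReadOnly = "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
  } : k => v if var.create && var.create_iam_role }

  policy_arn = each.value
  role       = aws_iam_role.this[0].name
}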

@bryantbiggs
Copy link
Member

Static keys are used in the current implementation - are you using a depends_on?

@nathaclmpaulino
Copy link
Author

nathaclmpaulino commented Apr 27, 2023

@bryantbiggs, I know now that using depends_on on modules is discouraged by HashiCorp, based on the issue you shared here before. Thank you for that!

However, in this issue we discussed a few workarounds to the problem, and in my closed PR (#2502) I provided a solution, using this module as a base to successfully manage three EKS clusters without any problems.

The only side effect of those changes is that the IAM roles and policies are managed differently, which changes how they are stored in the state (one or two terraform state mv commands may be needed to end up with a reliable state).

My whole point is this: since there are ways to make this module work without any major side effects when depends_on is used, shouldn't we fix this instead of relying on a known discouraged practice being avoided? Data sources are vital for any Terraform module, and the use of depends_on on modules is allowed by Terraform.
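For anyone following along, the state surgery I am referring to looks roughly like this (the addresses are hypothetical and depend on how your node groups and externally managed roles are named):

# Move the role and instance profile the module used to manage into the
# externally managed resources, so Terraform does not plan to destroy them.
terraform state mv \
  'module.eks.module.self_managed_node_group["on-demand"].aws_iam_role.this[0]' \
  'aws_iam_role.nodes'

terraform state mv \
  'module.eks.module.self_managed_node_group["on-demand"].aws_iam_instance_profile.this[0]' \
  'aws_iam_instance_profile.nodes'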

@bryantbiggs
Copy link
Member

Why should this module take on this burden and support a practice that Terraform has directly stated you should not use, instead of users changing the way they manage their Terraform?

@nathaclmpaulino
Copy link
Author

nathaclmpaulino commented Apr 27, 2023

Because this is a recurrent issue that, unfortunately, has gone stale a few times and has accumulated more workarounds than I expected.

One of them was previously discussed in issue #1753. But that issue covered two different problems, both of which originate from hashicorp/terraform#4149, as you mentioned here.

I'm really sorry to be so insistent on this, but I'm not defending any particular solution right now; I'm defending the existence of this as an issue.

@bryantbiggs
Copy link
Member

You are conflating multiple different things - in v18 the variable was a list of strings, which was converted to a map via toset() and then looped over with for_each. This results in the issue stated in hashicorp/terraform#4149, which has since been rolled up into hashicorp/terraform#30937.

In v19, we changed the variable to be a map of strings in order to follow what the Terraform maintainers prescribe as the current solution to 4149 and 30937 - which is to use static values for the keys of maps.

So now we come back to why it is so VITALLY important for users to provide reproductions so we can troubleshoot properly. Had you provided a computed value or used toset() in the value that you pass, I would have directed you to 30937 (I have all of these common issues pinned in a bookmark 😬). Instead, you have elected to use depends_on, which is another well-known issue, and I just tend to refer folks to hashicorp/terraform#30340 (I don't believe there is an open issue like 30937 where the maintainers acknowledge this as a bug or something they intend to fix - hence why I am reluctant to support workarounds for this behavior. You should just not use this behavior, period).

Here is a corrected example of the reproduction you provided - it does not suffer from any of the issues you have described:

provider "aws" {
  region = local.region
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

locals {
  cluster_name = "example"

  region        = "us-east-1"
  policy_prefix = "arn:aws:iam::aws:policy"

  azs      = ["us-east-1a", "us-east-1f"]
  vpc_cidr = "10.0.0.0/16"
}

################################################################################
# EKS Module
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.12"

  cluster_name                   = local.cluster_name
  cluster_version                = "1.24"
  cluster_endpoint_public_access = true
  create_node_security_group     = true

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true
  aws_auth_roles = [
    {
      rolearn  = aws_iam_role.nodes.arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups = [
        "system:bootstrappers",
        "system:nodes",
      ]
    },
  ]

  enable_irsa               = false
  cluster_encryption_config = {}

  # Self Managed Node Groups
  self_managed_node_group_defaults = {
    create_iam_instance_profile = false
    iam_instance_profile_arn    = aws_iam_instance_profile.nodes.arn

    suspended_processes = ["AZRebalance"]
    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_size = 75
          volume_type = "gp3"
        }
      }
    }

    subnet_ids = module.vpc.public_subnets
    network_interfaces = [
      {
        associate_public_ip_address = true
        delete_on_termination       = true
      }
    ]
    autoscaling_group_tags = {
      "k8s.io/cluster-autoscaler/enabled" : true,
      "k8s.io/cluster-autoscaler/${local.cluster_name}" : "owned",
    }
  }

  self_managed_node_groups = {
    spot = {
      name = "spot-ig"

      min_size     = 0
      max_size     = 7
      desired_size = 0

      instance_type        = "m6i.large"
      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
    }
    on-demand = {
      name = "on-demand-ig"

      min_size     = 1
      max_size     = 3
      desired_size = 1

      instance_type        = "m6i.large"
      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=on-demand'"
    }
  }

  tags = {
    Terraform   = "true"
    Environment = "test"
  }
}

################################################################################
# Supporting Resource
################################################################################

data "aws_iam_policy_document" "node_assume_policy" {
  statement {
    sid     = "EKSWorkersAssumeRole"
    actions = ["sts:AssumeRole"]

    principals {
      type = "Service"
      identifiers = [
        "ec2.amazonaws.com",
        "ssm.amazonaws.com",
      ]
    }
  }
}

resource "aws_iam_role" "nodes" {
  name               = "${local.cluster_name}-nodes"
  assume_role_policy = data.aws_iam_policy_document.node_assume_policy.json
}

resource "aws_iam_instance_profile" "nodes" {
  name = "${local.cluster_name}-nodes-instance-profile"
  role = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "ssm" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_role_policy_attachment" "ecr_read_only" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_iam_role_policy_attachment" "cni" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "worker" {
  role       = aws_iam_role.nodes.name
  policy_arn = "${local.policy_prefix}/AmazonEKSWorkerNodePolicy"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = local.cluster_name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
  intra_subnets   = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]

  enable_dns_hostnames = true
  enable_dns_support   = true
  enable_ipv6          = true

  enable_flow_log                      = true
  create_flow_log_cloudwatch_iam_role  = true
  create_flow_log_cloudwatch_log_group = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = {
    Terraform   = "true"
    Environment = "test"
  }
}

@so-test
Copy link

so-test commented Apr 28, 2023

@bryantbiggs Thanks, I understand the issue now! I'll look for a way to do this without depends_on.

@atamgp
Copy link

atamgp commented Apr 28, 2023

Thanks for the explanation.

I understand the depends_on part now. It effectively adds a dependency on every sub-resource, which eventually causes issues with apply-time evaluation, mostly of data sources.

I converted my code from:

resource "kubectl_manifest" "eniconfigs" {
  ....
}

module "eks_managed_node_group"
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git//modules/eks-managed-node-group?ref=v19.5.1"

  cluster_endpoint       = module.eks.cluster_endpoint
  cluster_auth_base64    = module.eks.cluster_certificate_authority_data
  ...
  depends_on = [
     kubectl_manifest.eniconfigs
   ]

to

module "eks_managed_node_group"
  source = ...

  cluster_endpoint       = module.eks.cluster_endpoint
  cluster_auth_base64    = module.eks.cluster_certificate_authority_data
  ...
    tags = {
    "eniconfigs_md5" = md5(kubectl_manifest.eniconfigs[0].yaml_body_parsed)
  }

Having said that (and I do understand how hard it is to maintain this across the issues mentioned), from an engineering perspective, if an option is explicitly disabled (create_iam_role = false, iam_role_attach_cni_policy = false), the underlying resources should preferably not be evaluated or looped through, and so not cause issues internally. But the bigger picture here may play a role.

@bryantbiggs
Copy link
Member

Closing out the issue for now - please let me know if there is anything left unanswered.

@github-actions
Copy link

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jun 17, 2023