Security Groups are incorrectly configured #1616

@jdziat

Description

If you use node_groups without a launch_template and without setting create_launch_template, the module falls back to the default EKS settings and the nodes are attached to the cluster security group. This causes an issue when you also create node_groups that do use launch templates or have create_launch_template configured: the top-level cluster security group does not allow all traffic between itself and the managed node group (worker) security group.
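
A workaround sketch that seems to unblock things, in case it is useful to others: manually bridging the two groups with plain aws_security_group_rule resources. This assumes the module outputs cluster_primary_security_group_id and worker_security_group_id are available in 17.x (adjust the names if they differ in your version); it is not an official fix.

# Hypothetical workaround: allow all traffic between the EKS cluster primary
# security group (used by node groups created without a launch template) and
# the module's worker security group (used by node groups that create one).
# Assumes the module output names below exist in v17.x.
resource "aws_security_group_rule" "cluster_primary_to_workers" {
  description              = "Allow nodes behind the cluster primary SG to reach worker SG nodes"
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  security_group_id        = module.eks.worker_security_group_id
  source_security_group_id = module.eks.cluster_primary_security_group_id
}

resource "aws_security_group_rule" "workers_to_cluster_primary" {
  description              = "Allow worker SG nodes to reach nodes behind the cluster primary SG"
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  security_group_id        = module.eks.cluster_primary_security_group_id
  source_security_group_id = module.eks.worker_security_group_id
}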

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which hopefully is the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Terraform: v1.0.7
  • Provider(s): aws 3.43.0
  • Module: 17.20.0

Reproduction

Steps to reproduce the behavior:

  • create a node_group
  • do not configure a launch_template
  • do not set create_launch_template to true
  • New nodes are assigned to the cluster security group, which does not allow the traffic required between them and node groups that do use launch templates.

Code Snippet to Reproduce

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "17.20.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.19"
  subnets         = concat(var.public_subnets, var.private_subnets)
  # Enable OIDC
  enable_irsa = true
  tags        = local.tags
  vpc_id      = var.vpc

  node_groups = {
    ng-ami-one = {
      desired_capacity = 1
      max_capacity     = 10
      min_capacity     = 1
      subnets          = var.private_subnets
      instance_types   = ["r5.xlarge"]
      k8s_labels = {
        environment = var.environment
        network     = "private"
      }
      additional_tags = local.k8s_tags
    }
    ng-ami-two = {
      desired_capacity       = 1
      max_capacity           = 10
      min_capacity           = 1
      subnets                = var.private_subnets
      instance_types         = ["r5.xlarge"]
      create_launch_template = true
      k8s_labels = {
        environment = var.environment
        network     = "private"
      }
      additional_tags = local.k8s_tags
    }
  }

  ###
  # Auth Configuration
  ###
  map_roles    = var.map_roles
  map_users    = var.map_users
  map_accounts = var.map_accounts
  # Disable kubeconfig output
  write_kubeconfig = false
  # Create security group rules to allow communication between pods on workers and pods in managed node groups.
  # Set this to true if you have AWS-Managed node groups and Self-Managed worker groups.
  # See https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1089
  worker_create_security_group = true
}
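
For what it's worth, the comment above the last argument points at #1089, which (if I am reading it correctly) introduced worker_create_cluster_primary_security_group_rules rather than worker_create_security_group. A sketch of what enabling it might look like, assuming that variable exists in 17.20.0 and does what #1089 describes; treat the name as an assumption, not a confirmed fix:

module "eks" {
  # ... same configuration as above ...

  # Assumption: creates rules allowing traffic between the module's worker
  # security group and the EKS cluster primary security group (see #1089).
  worker_create_security_group                       = true
  worker_create_cluster_primary_security_group_rules = true
}

Alternatively, setting create_launch_template = true on every node group appears to place them all behind the worker security group, which also avoids the split between the two groups.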

Expected behavior

If no launch_template is specified and create_launch_template is false, the module should either place the node group in a security group that allows the required traffic or fail with an error.

Actual behavior

The node_group is created successfully, but communication with node group two fails intermittently, and node group two cannot communicate with node group one.

Terminal Output Screenshot(s)

Additional context
