Ec2LaunchTemplateInvalidConfiguration: User data was not a TOML format that could be processed. #1729

@PDQDakota

Description

Please let me know if anything else is needed from me! I searched all issues, open and closed, and didn't find anything close to this, so hopefully I didn't miss anything obvious.

I am creating a new EKS cluster in a new VPC. The VPC, subnets, and cluster are created successfully, but the node group creation fails with the errors below:

First error

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp [::1]:80: connectex: No connection could be made because the target machine actively refused it.

   with module.eks.kubernetes_config_map.aws_auth[0],
   on .terraform\modules\eks\aws_auth.tf line 63, in resource "kubernetes_config_map" "aws_auth":
   63: resource "kubernetes_config_map" "aws_auth" {

Second error

Error: error waiting for EKS Node Group (asdf-prod:asdf-20211222184844101800000011) to create: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: 1 error occurred:
       * : Ec2LaunchTemplateInvalidConfiguration: User data was not a TOML format that could be processed.

   with module.eks.module.node_groups.aws_eks_node_group.workers["asdf"],
   on .terraform\modules\eks\modules\node_groups\main.tf line 1, in resource "aws_eks_node_group" "workers":
    1: resource "aws_eks_node_group" "workers" {
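For context on the second error: Bottlerocket AMIs consume user data as TOML (settings under [settings.kubernetes]), whereas the launch template the module builds appears to render shell-style bootstrap user data, which Bottlerocket then rejects with exactly this Ec2LaunchTemplateInvalidConfiguration message. A minimal sketch of the kind of TOML a Bottlerocket node expects, using the module's v17 outputs; the local name is hypothetical:

# Hypothetical sketch only: the settings keys are real Bottlerocket
# settings, but this local is illustrative, not what the module renders.
locals {
  bottlerocket_user_data = <<-TOML
    [settings.kubernetes]
    cluster-name        = "${local.cluster_name}"
    api-server          = "${module.eks.cluster_endpoint}"
    cluster-certificate = "${module.eks.cluster_certificate_authority_data}"
  TOML
}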

Versions

  • Terraform: 1.1.2
  • Provider(s):
    provider registry.terraform.io/hashicorp/aws v3.69.0
    provider registry.terraform.io/hashicorp/cloudinit v2.2.0
    provider registry.terraform.io/hashicorp/kubernetes v2.7.1
    provider registry.terraform.io/hashicorp/local v2.1.0
    provider registry.terraform.io/terraform-aws-modules/http v2.4.1
  • Module:
    EKS 17.24.0

Reproduction

Steps to reproduce the behavior:

I am not using workspaces, and I have cleared the local cache.

I run terraform apply and wait. My config is below.

Code Snippet to Reproduce

data "aws_eks_cluster" "asdf_cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "asdf_cluster" {
  name = module.eks.cluster_id
}

data "aws_availability_zones" "available" {}

locals {
  cluster_name = "asdf-prod"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.11.0"

  name                 = "asdf-prod"
  cidr                 = "10.16.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = ["10.16.1.0/24", "10.16.2.0/24", "10.16.3.0/24"]
  public_subnets       = ["10.16.4.0/24", "10.16.5.0/24", "10.16.6.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }

  tags = {
    terraform = "true"
    prod      = "true"
    service   = "eks"
    cluster   = local.cluster_name
  }
}

resource "aws_kms_key" "eks_enc_key" {
  description                        = "Key used by the asdf-prod EKS cluster."
  deletion_window_in_days            = 30
  key_usage                          = "ENCRYPT_DECRYPT"
  customer_master_key_spec           = "SYMMETRIC_DEFAULT"
  is_enabled                         = true
  enable_key_rotation                = false
  bypass_policy_lockout_safety_check = false
  multi_region                       = false

  tags = {
    terraform = "true"
    prod      = "true"
    service   = "eks"
    cluster   = local.cluster_name
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.24.0"

  cluster_name    = local.cluster_name
  cluster_version = "1.21"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  cluster_log_retention_in_days = 90

  cluster_endpoint_public_access_cidrs = [
    # PDQ Sumo
    "198.91.48.6/32",
    # PDQ Comcast
    "50.220.231.90/32",
  ]

  tags = {
    terraform = "true"
    prod      = "true"
  }

  node_groups = {
    asdf = {
      instance_types = [
        "c6g.2xlarge", # c6g.2xlarge is 8 vCPU and 16 GB
      ]
      name_prefix            = "asdf-"
      ami_type               = "BOTTLEROCKET_ARM_64"
      capacity_type          = "ON_DEMAND"
      disk_size              = 120 # in gigabytes
      create_launch_template = true
      disk_encrypted         = true
      disk_kms_key_id        = aws_kms_key.eks_enc_key.arn

      desired_capacity = 4
      max_capacity     = 10
      min_capacity     = 4
      update_config = {
        max_unavailable_percentage = 50
      }
    }
  }
}
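
A side note on the first error: the kubernetes provider dials localhost when it has no host configured, and the snippet above defines the aws_eks_cluster and aws_eks_cluster_auth data sources without a matching provider "kubernetes" block. A minimal sketch of that wiring, assuming the data source names above:

provider "kubernetes" {
  # Point the provider at the EKS cluster instead of its localhost default.
  host                   = data.aws_eks_cluster.asdf_cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.asdf_cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.asdf_cluster.token
}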

Expected behavior

The cluster and node group are created and ready for workloads to be run.

Actual behavior

The cluster is created but the node group fails to create.

Terminal Output Screenshot(s)

(screenshot of the terminal output showing the errors above)

Additional context

I'm running this on Windows locally and can pivot to a Linux-based machine if that'll help.
