Karpenter iam_policy_attachment fails when providing role name #2461
Comments
A working example to use in place of the current iam_policy_attachment, modeled after the EKS module's resource:
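A rough sketch of the pattern being suggested; the local, variable, and resource names and the policy list below are assumptions for illustration, not the module's actual code:

data "aws_partition" "current" {}

locals {
  # assumed helper value, mirroring the style used in the EKS module
  iam_role_policy_prefix = "arn:${data.aws_partition.current.partition}:iam::aws:policy"
}

resource "aws_iam_role_policy_attachment" "this" {
  # for_each keys are policy ARNs that are known at plan time, so the
  # attachment does not depend on values that are only known after apply
  for_each = toset([
    "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
    "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
    "${local.iam_role_policy_prefix}/AmazonEKS_CNI_Policy",
  ])

  policy_arn = each.value
  # refers to the IAM role resource created by the module (assumed name)
  role       = aws_iam_role.this[0].name
}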
|
We will need a reproduction that we can deploy and that displays the error you are seeing. See the issue template:
|
Updated @bryantbiggs |
Is that sufficient? |
unfortunately, no - we work off of vanilla Terraform to troubleshoot issues |
JSON is directly supported by the Terraform CLI and can be used with the regular commands. JSON is functionally no different from HCL and presents the same issue. Direct HCL can also be found in the closed issue I pointed to. |
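For illustration, the same kind of configuration can be written in Terraform's JSON syntax (a .tf.json file) and planned with the usual commands; the values below are placeholders, not the redacted reproduction:

{
  "module": {
    "karpenter": {
      "source": "terraform-aws-modules/eks/aws//modules/karpenter",
      "version": "~> 19.7",
      "cluster_name": "example",
      "irsa_oidc_provider_arn": "arn:aws:iam::111111111111:oidc-provider/example"
    }
  }
}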
Could you please try reproducing with the above code block @bryantbiggs via |
I'm having the same issue. Cluster:
Another Terraform project with a separate state:
Last week I had the same issue for a day, and then it disappeared after updating the AWS provider version. Not sure if that's actually related, since it returned. I hope this example helps reproduce the issue. |
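A rough sketch of a layout like the one described, with the cluster in one project and Karpenter consumed from a second project via remote state; the backend settings, output names, and module inputs here are assumptions:

# Second project, separate state: read the cluster outputs from the first project
data "terraform_remote_state" "eks" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state" # assumed backend settings
    key    = "eks/terraform.tfstate"
    region = "eu-west-1"
  }
}

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 19.7"

  cluster_name           = data.terraform_remote_state.eks.outputs.cluster_name
  irsa_oidc_provider_arn = data.terraform_remote_state.eks.outputs.oidc_provider_arn
}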
By the way, I've found this issue here thats closed: When applying the changes locally everything works perfectly! |
I need something that is going to show there is an issue - so far that has not happened. Here is a stab at it based on what I see above, and it deploys just fine without any issue:

provider "aws" {
  region = local.region
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

data "aws_availability_zones" "available" {}
data "aws_caller_identity" "current" {}

locals {
  name   = "karp-ex"
  region = "eu-west-1"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Example    = local.name
    GithubRepo = "terraform-aws-eks"
    GithubOrg  = "terraform-aws-modules"
  }
}

################################################################################
# EKS Module
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.10"

  cluster_name    = local.name
  cluster_version = "1.24"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {}
  }

  tags = local.tags
}

################################################################################
# Karpenter Module
################################################################################

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 19.10"

  cluster_name = module.eks.cluster_name

  irsa_oidc_provider_arn          = module.eks.oidc_provider_arn
  irsa_namespace_service_accounts = ["karpenter:karpenter"]

  # Since Karpenter is running on an EKS Managed Node group,
  # we can re-use the role that was created for the node group
  create_iam_role = false
  iam_role_arn    = module.eks.eks_managed_node_groups["default"].iam_role_arn
}

################################################################################
# Supporting resources
################################################################################

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags
}
|
I've got your point and I'm working on it. Would it be possible to give it a try using the blueprints I shared earlier and using a Terraform data source to look up the managed node group? I suspect there may be something there. Thank you, I appreciate your help! |
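For reference, a minimal sketch of looking up the managed node group with a data source instead of a module output; the cluster and node group names here are assumptions:

# Look up the existing managed node group to reuse its node IAM role
data "aws_eks_node_group" "default" {
  cluster_name    = "karp-ex"  # assumed names
  node_group_name = "default"
}

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 19.10"

  cluster_name = "karp-ex"

  # other inputs (IRSA settings, etc.) omitted for brevity
  create_iam_role = false
  iam_role_arn    = data.aws_eks_node_group.default.node_role_arn
}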
I have faced the same error, and solved it with this #2337 (comment) and hashicorp/terraform#26383 (comment) by removing But it would be great to have this PR #2462 somehow accepted, since I'm sure there will be a case where removing the |
Closing till a reproduction can be provided - I went out of my way to try to reproduce (see above) and I am unable to do so. Using |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
Description
When a desired IAM role name is provided for creation and the name prefix is disabled, the iam_policy_attachment fails with a circular dependency.
"new karpenter policy fails" #2306 should not have been closed, as it is still broken.
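A minimal sketch of the kind of input combination being described, assuming the module's iam_role_name and iam_role_use_name_prefix inputs are the ones involved; the names and version below are placeholders, not the actual reproduction:

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 19.7"

  cluster_name           = module.eks.cluster_name
  irsa_oidc_provider_arn = module.eks.oidc_provider_arn

  # Providing an explicit role name with the name prefix disabled
  # is the combination described above
  create_iam_role          = true
  iam_role_name            = "karpenter-node"  # hypothetical name
  iam_role_use_name_prefix = false
}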
Versions
Module version: v19.7.0 (and v18.31.2, where the issue was previously closed)
Terraform version: v1.3.7
Provider version: registry.terraform.io/hashicorp/aws v4.53.0
Reproduction Code [Required]
This is cdktf JSON output that has been trimmed/redacted but can still be used directly with the Terraform CLI.
Steps to reproduce the behavior:
Attempt a plan on the above block
Expected behavior
Successful plan
Actual behavior
Plan fails with errors on for_each in the iam_role_policy_attachment resource.
Terminal Output Screenshot(s)
Additional context
Other resources handle this differently, as mentioned in #2306 (comment).
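As a standalone illustration of the underlying Terraform limitation referenced in the thread (hashicorp/terraform#26383), and not the module's actual code: for_each keys must be known at plan time, so keying the attachment on statically known policy ARNs plans fine, while deriving the keys from an attribute that is only known after apply fails. The role below is hypothetical and only makes the sketch self-contained:

resource "aws_iam_role" "example" {
  name = "for-each-demo"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Plans fine: the for_each keys are statically known policy ARNs
resource "aws_iam_role_policy_attachment" "works" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
  ])

  role       = aws_iam_role.example.name
  policy_arn = each.value
}

# Fails at plan time ("Invalid for_each argument"): the keys are derived
# from the role ARN, which is only known after apply
resource "aws_iam_role_policy_attachment" "broken" {
  for_each = {
    for arn in ["arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"] :
    "${aws_iam_role.example.arn}/${arn}" => arn
  }

  role       = aws_iam_role.example.name
  policy_arn = each.value
}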