Data source for current AWS partition breaks modules #1753
Comments
Same problem here.
What region is being used? Just stating that the partition breaks doesn't help much - this is what this data source was intended for, and it's used in a similar fashion in other modules. If we could get more information, maybe we can help track down why it's not working for you all but works for others.
Hello, I'm just using the default "aws" partition in us-east-1. The lookup itself doesn't work when it's in a module; it works fine in the rest of my code but doesn't work in any modules.
Again, this doesn't tell us anything - the use of this data source is well known, and it is used throughout our modules as well as within testing of the AWS provider.
Is there anything specific I can provide that hasn't been provided yet in order to help figure out the issue?
But I can see you edited above now to add us-east-1. We will leave this issue open since it seems others are 👍🏽 - but I would suggest filing a bug ticket with the upstream AWS provider (note - they will want a way to reproduce the error).
Apologies, I added in the region when I re-read it properly after posting. I was trying to re-create my code with only the relevant pieces to submit a bug ticket with the upstream provider, and the issue is caused by a depends_on that I have in the module. Taking the depends_on out fixes the issue! Thank you for your time and for taking a look at this.
Awesome - glad it's resolved! I've never seen that data source not work and it had me stumped 😅
@bryantbiggs would you kindly explain why it is that this code
results in
Is there any workaround to use the roles that were just created in the same module?
@lure I suspect it's due to hashicorp/terraform#4149 - could you try creating the policy first and then creating the cluster with the additional policy? I suspect this might pass without issue.
@bryantbiggs I am of the opinion that this is still an issue, as it means that it is now no longer possible to use
I would suggest filing an issue with the upstream Terraform repository for this - as far as I am aware, there are no restrictions on using a data source with a module
The problem is that the result of the data source is used in a
Removing depends_on worked for me initially, but on subsequent runs of `terraform apply`, the issue returned.
@paulbsch are you using
@jeroen-s I am using that, however, I just tried removing it and the issue still persists. Currently, I can't even run a `terraform destroy` to remove my cluster, as the issue shows up when doing that also.
So after digging into this, it is in fact due to hashicorp/terraform#4149 and not the data source. There are two ways that I found to work around this:
(truncated for brevity)

module "eks" {
  eks_managed_node_groups = {
    ...
  }
}

resource "aws_iam_policy" "node_additional" {
  name        = "${local.name}-additional"
  description = "Example usage of node additional policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "ec2:Describe*",
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })

  tags = local.tags
}

resource "aws_iam_role_policy_attachment" "additional" {
  for_each = module.eks.eks_managed_node_groups

  policy_arn = aws_iam_policy.node_additional.arn
  role       = each.value.iam_role_name # the role argument takes the role name, not the ARN
}
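For context, a minimal sketch of the underlying hashicorp/terraform#4149 behaviour (all resource names here are hypothetical and not taken from this module): `for_each` keys must be known at plan time, so building them from an attribute that is only known after apply fails.

```hcl
# Hypothetical minimal reproduction of hashicorp/terraform#4149 (illustrative names only):
resource "aws_iam_policy" "example" {
  name = "example-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = ["ec2:Describe*"]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}

# The policy ARN is only known after apply, so using it to build for_each keys
# fails during plan with "Error: Invalid for_each argument".
resource "aws_iam_role_policy_attachment" "broken" {
  for_each = toset([aws_iam_policy.example.arn])

  policy_arn = each.value
  role       = "some-existing-role" # hypothetical, pre-existing IAM role name
}
```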
I've opened a PR to add a clear notification to the README regarding this issue: https://github.com/bryantbiggs/terraform-aws-eks/tree/fix/remote-access-sgs#%E2%84%B9%EF%B8%8F-error-invalid-for_each-argument-
Just to be clear, there are two separate problems discussed in this issue:
Both are due to hashicorp/terraform#4149. The first one has nothing to do with the data source and can be handled as @bryantbiggs showed above. The second one is what I think the original issue is about and what I was referring to with:
Moreover, wherever anything within a
Disregard my comment.
Why is this ticket closed? This is still an issue. If you add `depends_on = [whatever]`, it still breaks the setup.
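To make that pattern concrete, here is a rough sketch (hypothetical resource names and placeholder values, assuming the terraform-aws-modules/eks/aws module) of the configuration shape being described:

```hcl
# Hypothetical sketch of the pattern that triggers the error (inputs abbreviated,
# all values are placeholders):
resource "aws_iam_policy" "extra" {
  name = "extra-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{ Action = ["ec2:Describe*"], Effect = "Allow", Resource = "*" }]
  })
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "example"
  cluster_version = "1.21"
  vpc_id          = "vpc-12345678"                 # placeholder
  subnet_ids      = ["subnet-aaaa", "subnet-bbbb"] # placeholders

  # A module-level depends_on defers every data source inside the module
  # (including data.aws_partition.current) until apply, so values the module
  # derives from it - such as IAM policy ARNs used as for_each keys - become
  # unknown at plan time and the plan fails with "Invalid for_each argument".
  depends_on = [aws_iam_policy.extra]
}
```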
Bump
Bump - I am running into this issue as well for the module in the above scenarios.
Can I suggest a solution? The offending data source "aws_partition" simply returns a string, "aws" vs. "aws-cn". It's not an error that this cannot be determined until apply. To work around it, we can simply take "aws-cn" from a variable.
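A rough sketch of that suggestion, assuming a hypothetical `partition` input variable (this is not an existing input of the module):

```hcl
# Hypothetical sketch: supply the partition as an input variable instead of
# reading data.aws_partition.current, so the value is already known during plan.
variable "partition" {
  description = "AWS partition used when building ARNs, e.g. aws or aws-cn"
  type        = string
  default     = "aws"
}

locals {
  # Same shape as the module's internal policy_arn_prefix, but known at plan time.
  policy_arn_prefix = "arn:${var.partition}:iam::aws:policy"
}
```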
Just ran into this when creating a new cluster from Terraform config that worked on older versions of the EKS module... This is broken in the module itself and should be considered a bug and fixed. In my example I am supplying no additional IAM policies. See the error below.
Could also return a
Correct, but the user could pass in their own
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Please provide a clear and concise description of the issue you are encountering, your current setup, and what steps led up to the issue. If you can provide a reproduction, that will help tremendously.
Before you submit an issue, please perform the following first:
- Remove the local `.terraform` directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): `rm -rf .terraform/`
- Re-initialize: `terraform init`
Versions
Terraform v1.1.2
on linux_amd64
Reproduction
Steps to reproduce the behavior:
- No workspaces
- I have cleared the local cache
- Steps to reproduce:
  1. `terraform init`
  2. `terraform plan`
Code Snippet to Reproduce
Expected behavior
The `terraform plan` should be successful.
Actual behavior
Terraform plan fails with 3 versions of this error (one for the cluster and one per node group):
Terminal Output Screenshot(s)
Additional context
There is 1 error for the cluster, and one for each node group. The error is from this line of code:
policy_arn_prefix = "arn:${data.aws_partition.current.partition}:iam::aws:policy"
Changing it in .terraform/modules/eks/main.tf line 173 & .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 393 to
policy_arn_prefix = "arn:aws:iam::aws:policy"
will give a successful terraform plan. I've run into this before and had to make the partition a variable and pass in the partition value from `data "aws_partition" "current" {}` when creating other modules. For whatever reason it is not called during a plan if it is in a module.
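As a sketch of that variable-based workaround (the module path and variable name are hypothetical), the root module can read the data source once and pass the resulting string down:

```hcl
# Hypothetical sketch of the workaround described above: read the partition in
# the root module and hand the resulting string to a child module as a variable.
data "aws_partition" "current" {}

module "example" {
  source = "./modules/example" # hypothetical local module

  # The root-level data source has no dependencies, so it is read during plan
  # and the child module receives a known string instead of reading the data
  # source itself behind a depends_on.
  partition = data.aws_partition.current.partition
}
```

Inside the child module, `var.partition` would then replace `data.aws_partition.current.partition` wherever ARNs are built.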