fix: Ensure `var.region` is passed through `aws_region` data source #329
Conversation
Given `var.region` can be null, add another `try()` to check for a provided value before falling back to the current region.
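A minimal sketch of the pattern being described, with assumed names for the data source and local (the module's actual wiring may differ):

```hcl
variable "region" {
  description = "Region where the resources will be created; defaults to the provider's region when null"
  type        = string
  default     = null
}

# With the v6 provider's enhanced region support, the data source accepts a
# region argument, so a caller-supplied var.region is reflected in its result
data "aws_region" "current" {
  region = var.region
}

locals {
  # Prefer an explicitly provided region, falling back to the current
  # (provider-configured) region; try() keeps the expression null-safe
  region = try(coalesce(var.region, data.aws_region.current.region), null)
}
```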
The examples do not seem to exercise the enhanced region support in the v6 provider, so I locally modified one to build the ECS resources in a different region:

```hcl
provider "aws" {
  region = local.region
}

locals {
  # The region we are executing Terraform against.
  region = "us-east-1"
  # The region we are building ECS resources in.
  target_region = "eu-west-1"

  name     = "ex-${basename(path.cwd)}"
  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  container_name = "ecsdemo-frontend"
  container_port = 3000

  tags = {
    Name       = local.name
    Example    = local.name
    Repository = "https://github.com/terraform-aws-modules/terraform-aws-ecs"
  }
}

################################################################################
# Cluster
################################################################################

module "ecs_cluster" {
  source = "../../modules/cluster"

  region = local.target_region

  name = local.name

  # Capacity provider
  default_capacity_provider_strategy = {
    FARGATE = {
      weight = 50
      base   = 20
    }
    FARGATE_SPOT = {
      weight = 50
    }
  }

  tags = local.tags
}

################################################################################
# Service
################################################################################

module "ecs_service" {
  source = "../../modules/service"

  region = local.target_region

  name        = local.name
  cluster_arn = module.ecs_cluster.arn

  cpu    = 1024
  memory = 4096

  # Enables ECS Exec
  enable_execute_command = true

  # Container definition(s)
  container_definitions = {
    (local.container_name) = {
      cpu       = 512
      memory    = 1024
      essential = true
      image     = "public.ecr.aws/aws-containers/ecsdemo-frontend:776fd50"
      portMappings = [
        {
          name          = local.container_name
          containerPort = local.container_port
          hostPort      = local.container_port
          protocol      = "tcp"
        }
      ]

      # Example image used requires access to write to root filesystem
      readonlyRootFilesystem = false

      memoryReservation = 100
    }
  }

  subnet_ids = module.vpc.private_subnets
  security_group_egress_rules = {
    all = {
      ip_protocol = "-1"
      cidr_ipv4   = "0.0.0.0/0"
    }
  }

  tags = local.tags
}

#
# Setup VPC
#
# terraform-aws-modules/vpc does not fully support specifying the region, so use
# the provider alias method to create a VPC in a different region.
#
provider "aws" {
  alias  = "eu-west-1"
  region = local.target_region
}

data "aws_availability_zones" "available" {
  provider = aws.eu-west-1
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 6.0"

  providers = {
    aws = aws.eu-west-1
  }

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = local.tags
}
```

Prior to the change in this PR, the service fails to start.
I can add a cross-region example; I'm not sure how else to demonstrate this.
Ok, thank you. Curious: why use the region in that way instead of just setting it on the provider and letting everything else inherit it?
## [6.1.3](v6.1.2...v6.1.3) (2025-07-31)

### Bug Fixes

* Ensure `var.region` is passed through `aws_region` data source ([#329](#329)) ([9a7f9b5](9a7f9b5))
This PR is included in version 6.1.3 🎉
We have resources spread across a few regions, and there is enough interdependence to put them all in one Terraform workspace.
I would advise against doing that.
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Update the `ECSTasksAssumeRole` policy to use the `region` variable in the assume role condition. This configures the policy for the region the service is deployed to.
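As a hedged illustration of that change (the data source and document names below are assumptions, not the module's actual identifiers), the region-aware condition could be built like this:

```hcl
data "aws_caller_identity" "current" {}
data "aws_partition" "current" {}

data "aws_iam_policy_document" "ecs_tasks_assume" {
  statement {
    sid     = "ECSTasksAssumeRole"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "aws:SourceAccount"
      values   = [data.aws_caller_identity.current.account_id]
    }

    condition {
      test     = "ArnLike"
      variable = "aws:SourceArn"
      # Build the ARN from the target region (local.region, derived from
      # var.region) rather than the provider's region, so cross-region
      # deployments match the created service
      values   = ["arn:${data.aws_partition.current.partition}:ecs:${local.region}:${data.aws_caller_identity.current.account_id}:*"]
    }
  }
}
```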
Motivation and Context

When applying with a provider configured for `us-east-1` and creating a cluster in `eu-west-1`, the assume role policy is generated as follows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ECSTasksAssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "XXXXXXXXXXXX"
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:ecs:us-east-1:XXXXXXXXXXXX:*"
        }
      }
    }
  ]
}
```

Note the `ArnLike` is `arn:aws:ecs:us-east-1:XXXXXXXXXXXX:*` when it should match the region of the created service, e.g. `arn:aws:ecs:eu-west-1:XXXXXXXXXXXX:*`.

With the incorrect region, the task deployment fails with an error reported via service events in the AWS Console.
Updating the `aws:SourceArn` with the correct region resolves the issue.

Breaking Changes
None that I am aware of.
How Has This Been Tested?
- I have updated at least one of the `examples/*` to demonstrate and validate my change(s)
- I have tested and validated these changes using one or more of the provided `examples/*` projects
- I have executed `pre-commit run -a` on my pull request