AWS Provider in v0.5.3 - Credentials can't be used in modules #2445

Closed
8interactive opened this issue Jun 23, 2015 · 19 comments

@8interactive

I believe this is related to #1380

Basically, if a top-level module declares and configures the AWS provider but does not also directly create an AWS resource, Terraform complains that the provider's credentials are not set. I'm not sure whether I am doing something wrong here, but it does feel like a bug.

My directory structure is as follows:

.
├── main.tf
├── modules
│   ├── a
│   │   ├── main.tf
│   │   └── variables.tf
│   └── b
│       ├── main.tf
│       └── variables.tf
├── terraform.tfvars
└── variables.tf

main.tf:

module "base" {
    source = "modules/a"
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
    vpc_id = "${var.vpc_id}"
    vpc_name = "${var.vpc_name}"
    cidr_blocks = "${var.cidr_blocks}"
    zones = "${var.zones}"
}

variables.tf:

variable "access_key" {}
variable "secret_key" {}
variable "vpc_name" {}
variable "vpc_id" {}
variable "zones" {
    default = "eu-west-1a,eu-west-1b"
}
variable "cidr_blocks" {
    default = "10.0.10.0/24,10.0.11.0/24"
}

terraform.tfvars:

access_key = "xx"
secret_key = "x"
vpc_id = "1"
vpc_name = "test"

a/main.tf:

provider "aws" {
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
    region = "${var.region}"
}

module "b" {
    source = "../b"
    vpc_id = "${var.vpc_id}"
    vpc_name = "${var.vpc_name}"
    cidr_blocks = "${var.cidr_blocks}"
    zones = "${var.zones}"
}

/*
resource "aws_security_group" "dmz" {
     name = "${var.vpc_name}-dmz"
     description = "Allows SSH, HTTP, and HTTPS access from internal EC2 instances through the NAT"
     vpc_id = "${var.vpc_id}"
}
*/

b/main.tf:

resource "aws_subnet" "subnet" {
    vpc_id = "${var.vpc_id}"
    cidr_block = "${element(split(",",var.cidr_blocks),count.index)}"
    availability_zone = "${element(split(",",var.zones),count.index)}"
    map_public_ip_on_launch = true
    tags {
       Name = "${var.vpc_name}-${element(split(",",var.zones),count.index)}-${count.index}"
    }
    count = "${length(split(",",var.zones))}"
}

output "cidr_blocks" {
    value = "${aws_subnet.subnet.cidr_block}"
}

output "ids" {
    value = "${join(",",aws_subnet.subnet.*.id)}"
}

Both a/variables.tf and b/variables.tf:

variable cidr_blocks {}
variable zones {}
variable vpc_name {}
variable vpc_id {}

When I run:

rm -rf .terraform/ && terraform get && terraform plan -var-file=./terraform.tfvars

I receive the following output:

Get: file:///Users/timothykimball/aire/aire_core/terraform/network/test_case/modules/a
Get: file:///Users/timothykimball/aire/aire_core/terraform/network/test_case/modules/b
There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * module.base.module.b.provider.aws: "access_key": required field is not set
  * module.base.module.b.provider.aws: "secret_key": required field is not set
  * module.base.module.b.provider.aws: "region": required field is not set

If I uncomment the aws_security_group resource in a/main.tf, I receive instead:

Get: file:///Users/timothykimball/aire/aire_core/terraform/network/test_case/modules/a
Get: file:///Users/timothykimball/aire/aire_core/terraform/network/test_case/modules/b
Refreshing Terraform state prior to plan...

<deleted text>

+ module.base
    1 resource(s)
+ module.base.b
    2 resource(s)
@mitchellh
Contributor

This should be fixed now.

@mitchellh
Contributor

Please let me know if it isn't. The PRs are out there to fix this!

@8interactive
Author

@mitchellh - I'm confused, as this is present in 0.5.3, which is the latest release, yes?

How do I figure out which pull request to use?

@mitchellh
Contributor

Should be fixed now!

@timothykimball

Apologies - I'm not sure I understand. What version of the application is the bug fixed in?

Absent a version, does that mean only in master?

Sorry!
Tim

@mitchellh
Contributor

No problem, Tim. It just means that the fix is in Git, slated for the next release. We'll be releasing 0.6.0 soon, so that is the release it will be in.

@brikis98
Contributor

brikis98 commented Nov 4, 2015

Was this fix available in v0.6.4? I just hit something that seems like an identical bug.

I have a top-level module called A that includes a module called B. Module B contains a single main.tf file that does not create any resources of its own, but just includes a bunch of other modules C, D, and E. When I run terraform plan, I get an error like this:

* module.B.module.C.provider.aws: "region": required field is not set
* module.B.module.D.provider.aws: "region": required field is not set
* module.B.module.E.provider.aws: "region": required field is not set

If I add any resource to Module B, the error goes away.
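
For reference, module B's main.tf is essentially nothing but module blocks, roughly like this sketch (module names and paths are hypothetical, matching the layout described above):

# modules/B/main.tf - creates no resources of its own, only wires up nested modules
module "C" {
  source = "../C"
}

module "D" {
  source = "../D"
}

module "E" {
  source = "../E"
}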

@Bochenski

I'm seeing the same as @brikis98 in v0.6.6

@CommanderMoto

bump re: v0.6.6 on Atlas ... except I can't make the error go away.
I tried adding the following to the module-aggregating modules (e.g. aws_network, as pinched from your atlas-examples repo):

resource "null_resource" "for_testing" {
}

and still, when I run "terraform push", Atlas gives me these representative errors:

* module.aws_network.module.openvpn.provider.aws: "region": required field is not set
* module.aws_network.module.openvpn.provider.aws: "access_key": required field is not set
* module.aws_network.module.openvpn.provider.aws: "secret_key": required field is not set

@CommanderMoto

Update: ... and when, in Atlas, I set the environment variables "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", and "AWS_REGION" on the environment where I'm pushing my config, the errors all go away. So that explains why this has been working on my local machine but not in Atlas.

This also very likely explains why, when I had a mismatch between those environment variables and the access_key / secret_key (Terraform) variables, all hell broke loose and Terraform created some of my resources in the wrong AWS account. (I mitigated that problem by using the allowed_account_ids parameter of provider "aws".)
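
For anyone else wanting that guard, the provider block looks roughly like this (a sketch only; the account ID is a placeholder and the variable names are assumed):

provider "aws" {
    access_key          = "${var.access_key}"
    secret_key          = "${var.secret_key}"
    region              = "${var.region}"
    # Refuse to manage resources unless the credentials resolve to this account
    allowed_account_ids = ["123456789012"]   # placeholder account ID
}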

@mrmichaeladavis

I too am having this issue, using the stock atlas-examples/infrastructure project 01 with 0.6.6.

@amotoohno, as a workaround (an ugly one), you can add the provider block with region/access/secret into each module's main.tf, and it works without using environment variables.
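
In other words, each module's main.tf gets its own provider block, something like this rough sketch (assuming the module already declares region/access/secret variables and receives them from its parent):

provider "aws" {
    region     = "${var.region}"
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
}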

@pgporada

I'm seeing this on v0.6.8 with region not being set. I'm building locally, not with Atlas.

  * module.network.module.ephemeral_subnet.provider.aws: "region": required field is not set
  * module.network.module.private_subnet.provider.aws: "region": required field is not set
  * module.network.module.public_subnet.provider.aws: "region": required field is not set
  * module.network.module.vpc.provider.aws: "region": required field is not set

Here is the structure of my project so far:

.
├── modules
│   └── aws
│       └── network
│           ├── network.tf
│           ├── private_network
│           │   └── private_network.tf
│           ├── public_network
│           │   └── public_network.tf
│           ├── rds
│           │   └── rds.tf
│           └── vpc
│               └── vpc.tf
└── providers
    └── aws
        ├── Makefile
        ├── global
        │   └── global.tf
        ├── main.tf
        ├── prod
        │   └── prod.tfvars
        ├── qa
        │   └── qa.tfvars
        └── staging
            └── staging.tfvars

main.tf:

variable "region" { default = "us-east-1" }
variable "aws_access_key" { }
variable "aws_secret_key" { }
variable "env" { description = "What environment is this, ex: prod, staging, qa" }
variable "company" { }
variable "vpc_cidr" { description = "x.x.x.x/xx" }
variable "azs" { description = "Concatenated list of availability zones" }
variable "public_subnets" { }
variable "private_subnets" { }
variable "ephemeral_subnets" { }
variable "db_subnets" { }

provider "aws" {
    region      = "${var.region}"
    access_key  = "${var.aws_access_key}"
    secret_key  = "${var.aws_secret_key}"
}

module "network" {
    source              = "../../modules/aws/network"
    region              = "${var.region}"
    env                 = "${var.env}"
    vpc_cidr            = "${var.vpc_cidr}"
    azs                 = "${var.azs}"
    public_subnets      = "${var.public_subnets}"
    private_subnets     = "${var.private_subnets}"
    ephemeral_subnets   = "${var.ephemeral_subnets}"
    company             = "${var.company}"
}

Here is how I recreate the issue:

cd providers/aws
rm -rf .terraform
terraform get -update=true
terraform plan -input=false -module-depth=-1 -var-file=qa/qa.tfvars

If I run a plan without -input=false, I get asked several times for the region. After typing out the region, the plan will work.

$ terraform plan -module-depth=-1 -var-file=qa/qa.tfvars
provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: us-east-1

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: us-east-1

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: us-east-1

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: us-east-1

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

Per @amotoohno's comment about setting environment variables locally, I can run the following and have a plan build successfully.

export AWS_REGION='us-east-1'

@brikis98
Contributor

@mitchellh: Could you re-open this issue? I just hit it again on v0.6.8.

Here is a simple repro case. Create the following files & folders:

A.tf
modules
├── B
│   └── B.tf
└── C
    └── C.tf

Contents of A.tf:

provider "aws" {
  access_key = "(my access key)"
  secret_key = "(my secret key)"
  region = "us-west-1"
}

module "B" {
  source = "./modules/B"
}

Contents of B.tf:

module "C" {
  source = "../C"
}

Contents of C.tf:

resource "aws_iam_policy" "dummy_policy" {
    name = "dummy_policy"
    description = "Any resource would work here"
    policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:Describe*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

Run the following:

> terraform get -update && terraform plan -input=false
Get: file:///Users/brikis98/source/tmp/terraform-bug/modules/B (update)
Get: file:///Users/brikis98/source/tmp/terraform-bug/modules/C (update)
There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * module.B.module.C.provider.aws: "region": required field is not set

@mgwilliams

+1 I've just run into this as well in 0.6.11.

@PaulusTM

PaulusTM commented Mar 3, 2016

+1 I've just noticed the exact same behavior in 0.6.12

@thegranddesign

@mitchellh @phinze please reopen this issue. I'm hitting the exact issue that many others (specifically @pgporada) described above.

@phinze
Contributor

phinze commented Mar 24, 2016

Hi @thegranddesign - sorry for the trouble! Is the issue you're seeing properly described in #4865? That's on my short list of issues to tackle soon.

@thegranddesign

@phinze 100%! Thanks for the quick reply. I'll subscribe to that issue for updates. 😀

@ghost

ghost commented Apr 27, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 27, 2020