
terraform 0.8.2-0.8.4 provider inheritance is failing for deeply nested modules #11282

Closed

erickt opened this issue Jan 19, 2017 · 7 comments

erickt commented Jan 19, 2017

In Terraform 0.8.2 through 0.8.4, the bug fixed in #6186 seems to have resurfaced; it affects at least the aws and datadog providers. I've reduced it to the following set of files:

main.tf:

provider "aws" {
    region = "us-west-2"
}

provider "datadog" {
    api_key = "11111111111111111111111111111111"
    app_key = "1111111111111111111111111111111111111111"
}

module "module-a" { source = "module-a" }

module-a/main.tf:

module "module-b" { source = "../module-b" }
// module "module-c" { source = "../module-c" }

module-b/main.tf:

module "module-c" { source = "../module-c" }

module-c/main.tf:

data "aws_ami" "nat_ami" {
  most_recent = true
  executable_users = ["self"]
  filter {
    name = "owner-alias"
    values = ["amazon"]
  }
  filter {
    name = "name"
    values = ["amzn-ami-vpc-nat*"]
  }
  name_regex = "^myami-\\d{3}"
  owners = ["self"]
}

resource "datadog_monitor" "monitor" {
    name = ""
    type = "metric alert"
    query = ""
    message = ""
}
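
For reference, the relative source paths above imply a directory layout along these lines (inferred from the paths; the report does not spell it out):

.
├── main.tf
├── module-a/
│   └── main.tf
├── module-b/
│   └── main.tf
└── module-c/
    └── main.tf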

When run with terraform plan -input=false, it errors out with:

Errors:

  * module.module-a.module.module-b.module.module-c.provider.datadog: "api_key": required field is not set
  * module.module-a.module.module-b.module.module-c.provider.datadog: "app_key": required field is not set
  * module.module-a.module.module-b.module.module-c.provider.aws: "region": required field is not set
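
As a stopgap not mentioned in the thread, and only until a fixed release is available, one could redeclare the provider configuration inside the deepest module so that the required fields are set where they are evaluated. A minimal sketch, reusing the dummy credentials from the repro above:

# module-c/main.tf (hypothetical stopgap: duplicate the provider blocks locally)
provider "aws" {
  region = "us-west-2"
}

provider "datadog" {
  api_key = "11111111111111111111111111111111"
  app_key = "1111111111111111111111111111111111111111"
}

Duplicating provider blocks this way defeats the purpose of inheritance, so it only makes sense as a temporary measure.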

Interestingly enough, if you uncomment the line in module-a/main.tf, you instead get the expected error that these (dummy) credentials are invalid, which shows the provider configuration is being inherited in that case:

Error refreshing state: 4 error(s) occurred:

* No valid credential sources found for Datadog Provider. Please see https://terraform.io/docs/providers/datadog/index.html for more information on providing credentials for the Datadog Provider
* No valid credential sources found for Datadog Provider. Please see https://terraform.io/docs/providers/datadog/index.html for more information on providing credentials for the Datadog Provider
* No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider
* No valid credential sources found for AWS Provider.
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider
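
Concretely, the module-a/main.tf variant described above is just the repro file with the commented line re-enabled:

module "module-b" { source = "../module-b" }
module "module-c" { source = "../module-c" }
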
mitchellh (Contributor) commented:

Thanks for pointing this out. This will actually be fixed by #11373 and #11341.

The issue is that it's fixed in the "new" graphs, but validate, input, and refresh are still using the "legacy" graph construction, which had this issue. To verify this, I also pushed a test case on plan that isolates the operation to only plan (thereby bypassing the refresh, validate, etc. legacy graphs for now): 290ad37

mitchellh self-assigned this Jan 24, 2017
mitchellh added this to the Terraform 0.9 milestone Jan 24, 2017

mitchellh (Contributor) commented:

This should work now on master.

ronaldtse commented:

For the record, I am currently seeing this issue with a set of AWS resources that works with 0.8.1 but gives the "No valid credential sources" error on versions 0.8.2-0.8.5. My credentials were correct in this case.

* No valid credential sources found for AWS Provider
  Please see https://terraform.io/docs/providers/aws/index.html for more information on
  providing credentials for the AWS Provider
(this message is repeated once per affected resource)

Hopefully the upcoming 0.9.0 release won't have this issue.

mitchellh (Contributor) commented:

@ronaldtse AHH! Every time we fix an issue related to this we add new tests and never remove old ones, so we shouldn't be introducing regressions. These must be edge cases we're not testing for, obviously. If you can get a simplified repro for us to run, please open a new issue and I'll MAKE SURE it's resolved. :) (until the next untested case is found, shakes fist in air)

ronaldtse commented:

@mitchellh Didn't quite have the time to work on the repro, but I'm happy to report back that 0.8.6 has resolved this issue. Thank you for the fix! 👍

mitchellh (Contributor) commented:

Great!

ghost commented Apr 17, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 17, 2020