Nested Module Bug #3114

Closed
meylor opened this issue Aug 28, 2015 · 8 comments · Fixed by #5022

Comments


meylor commented Aug 28, 2015

It appears as though there's a bug with nested modules. I've detailed the steps to reproduce and the errors that I'm seeing in https://github.com/meylor/terraform-nested-module-bug/blob/master/README.md

meylor changed the title from "commenting out a module with Provider/Provisioner 'xxxxxx' already initialized" to "Nested Module Bug" on Aug 31, 2015

meylor commented Aug 31, 2015

terraform-nested-module-bug

Overview

I believe this is a bug involving nested modules in Terraform ( https://www.terraform.io/docs/modules/create.html ). I've detailed the error that I'm seeing and the steps to reproduce the error below.

Steps to reproduce

  1. Clone https://github.com/meylor/terraform-nested-module-bug.git
  2. Edit terraform-nested-module-bug/terraform.tfvars to add your AWS keys.
  3. Make sure that map.tf doesn't use address space that overlaps with your current infrastructure.
  4. Set your AWS region in an environment variable: export AWS_REGION=us-west-1
  5. Run terraform get to fetch the modules.
  6. Run terraform plan to see the resources that will be generated.
  7. Run terraform apply to create the resources. This applies a root .tf configuration that calls a parent module, which in turn calls a child module.
  8. The resources will be provisioned correctly.
  9. In terraform-nested-module-bug/test.tf, comment out the following section:

```
#module "test" {
#  source = "./modules/parent"
#  vpc_id = "${aws_vpc.instance.id}"
#  env = "${var.env}"
#}
```

  10. Run terraform plan to see the same error that I'm seeing below:
```
meylor@meylor:~/terraform-nested-module-bug$ terraform plan
There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * 14 error(s) occurred:

* Provider 'aws' already initialized
* Provisioner 'remote-exec' already initialized
* Provider 'aws' already initialized
* Provisioner 'remote-exec' already initialized
* Provisioner 'remote-exec' already initialized
* Provisioner 'file' already initialized
* Provisioner 'file' already initialized
* Provisioner 'local-exec' already initialized
* Provisioner 'local-exec' already initialized
* Provisioner 'chef' already initialized
* Provisioner 'chef' already initialized
* Provisioner 'chef' already initialized
* Provisioner 'file' already initialized
* Provisioner 'local-exec' already initialized
```
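
For context, the shape that triggers this is a root config calling a parent module, which in turn instantiates a child module of its own. Below is a minimal sketch of what that parent module might look like; the file layout, variable names, and child wiring are assumptions for illustration (only the module "test" block and its source = "./modules/parent" above come from the repo).

```
# Hypothetical ./modules/parent/main.tf -- a parent module that holds no
# resources itself and only wires up a nested child module.
variable "vpc_id" {}
variable "env" {}

module "child" {
  # Nested module, resolved relative to the parent module's directory,
  # i.e. ./modules/parent/child in the repo checkout.
  source = "./child"
  vpc_id = "${var.vpc_id}"
  env    = "${var.env}"
}
```

Commenting out only the module "test" block orphans this whole subtree at once, which is the situation the errors above come from.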


phinze commented Sep 2, 2015

Repro'd locally thanks to your solid example repository. 👍

Looks like a problem with graph nodes not pruning correctly in the orphaned module case. Tagged and we'll get this fixed up.

brendangibat commented

I ran into the same issue. I was using an S3 remote state and nothing seemed to help: at one point I had nothing left in my .tf file at all, had my remote state deleted, and my local .terraform dir removed, and still the error persisted. I don't know if it was caused by some temporary cache, but renaming the Terraform directory, so that no S3 cache or local cache could match up on names, allowed me to move past the error.


prozach commented Dec 9, 2015

I believe I'm also running into this issue on 0.6.8. Any idea on timeline here?

apparentlymart commented

This looks like what I reported in #4218. If so, the workaround I've been using so far is to downgrade to 0.6.7.

apparentlymart commented

I've not delved into this yet but from looking at the discussion I think I have an idea of what the issue is here:

Terraform depends on the current config to instantiate providers in order to refresh objects that are in the state. This has tripped me up before: I've dropped a resource along with its provider block, only to have to temporarily add the provider block back in order to plan the change that removes the resource. That makes sense, because you need to instantiate the provider in order to call into it to destroy the resource, but it's pretty counter-intuitive until you get to know a bit about Terraform's internals.
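
As a minimal illustration of that trap (the provider and resource details here are hypothetical): after deleting the last resource, the provider block still has to stay in the config until the destroy has been applied.

```
# The provider block must survive until the destroy is applied --
# Terraform needs it to instantiate the provider that performs the
# destroy. (Region and resource below are assumed for illustration.)
provider "aws" {
  region = "us-west-1"
}

# resource "aws_instance" "example" {    # removed from the config, but
#   ami           = "ami-12345678"       # planning its destruction still
#   instance_type = "t2.micro"           # requires the provider above
# }
```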

The above is merely annoying when the provider block is in the root module, but if the provider block is in a nested module and that whole module is removed, we're confronted with a situation where we've "lost" all of that module's resources along with the provider blocks necessary to delete them.

Assuming I've got the right idea of the problem here, a few workarounds spring to mind:

  • Before removing the child module, delete all of the resource blocks from it and apply, so the resources get deleted and the module's section of the state is emptied out; then delete the module itself. This could be done by directly editing the files in .terraform/modules, to avoid making changes that might impact other callers of the same module.
  • Manually delete the resources from the module (using tools outside of Terraform) and then hack it out of the state file.

Neither is great.

I'm more concerned that this is going to be pretty hard to resolve, since it's a fundamental limitation of Terraform's architecture that provider configurations must live long enough to destroy the resources that result from them, and that's hard to achieve when modules (which are often used as "black box" abstractions, added and removed as a single unit) instantiate their own providers. As far as I can tell, the only bulletproof resolution would be to put the provider configuration in the state file, but of course that could end up including secret credentials. I'm reminded of #516, and the comment I left over there in particular.

apparentlymart commented

I just hit my issue again, so I finally got the opportunity to delve in and confirm that it is indeed this bug, and that downgrading to 0.6.7 didn't actually solve it... I must've changed something else at the same time that created the illusion of a fix.

In my case I had a subtree of modules that instantiate their own providers, and the root of that subtree was deleted from the config, leaving five modules in the state without their corresponding provider configurations.

My workaround, for those who were asking for one, was to delete the affected modules from the state file by hand, after deleting all of the resources they represented through the admin consoles.

I didn't try it, but I expect another way to do this would've been to re-insert the module declaration, run terraform get to install it, and then edit the module's config in .terraform/modules to still have the provider blocks but remove all of the resources. Then terraform plan should successfully produce a deletion diff for all of the resources which can be applied before ultimately re-removing the module block in the root module.
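
Concretely (a sketch, assuming the module layout from the repro repo): after re-adding the module block and running terraform get, the fetched copy under .terraform/modules would be cut down to something like this before planning.

```
# Hypothetical edited copy of the fetched module under .terraform/modules:
# keep the provider configuration so Terraform can still instantiate the
# provider that destroys the orphaned resources...
provider "aws" {
  region = "us-west-1"    # assumed; match whatever the module really used
}

# ...and delete every resource block, so the next plan is a pure set of
# destroys. Once applied, the module block in the root config can be
# removed for good.
```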

phinze added a commit that referenced this issue Feb 5, 2016
Currently failing, but going to move this commit around to vet fixes.

Confirmed that #4877 fixes the provisioner half of this!
phinze added a commit that referenced this issue Feb 5, 2016
Context:

As part of building up a Plan, Terraform needs to detect "orphaned"
resources--resources which are present in the state but not in the
config. This happens when config for those resources is removed by the
user, making it Terraform's responsibility to destroy them.

Both state and config are organized by Module into a logical tree, so
the process of finding orphans involves checking for orphaned Resources
in the current module and for orphaned Modules, which themselves will
have all their Resources marked as orphans.

Bug:

In #3114 a problem was exposed where, given a module tree that looked
like this:

```
root
 |
 +-- parent (empty, except for sub-modules)
       |
       +-- child1 (1 resource)
       |
       +-- child2 (1 resource)
```

If `parent` was removed, a bunch of error messages would occur during
the plan. The root cause of this was duplicate orphans appearing for the
resources in child1 and child2.

Fix:

This turned out to be a bug in orphaned module detection. When looking
for deeply nested orphaned modules, root.parent was getting added twice
as an orphaned module to the graph.

Here, we add an additional check to prevent a double add, which
addresses this scenario properly.

Fixes #3114 (the Provisioner side of it was fixed in #4877)
phinze added a commit that referenced this issue Feb 9, 2016
joshmyers pushed a commit to joshmyers/terraform that referenced this issue Feb 18, 2016
bigkraig pushed a commit to bigkraig/terraform that referenced this issue Mar 1, 2016

ghost commented Apr 27, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 27, 2020