
Clarify wording when lifecycle.prevent_destroy is set and terraform destroy is run #2473

Closed
nathanielks opened this issue Jun 24, 2015 · 18 comments · Fixed by #2992

Comments

@nathanielks
Contributor

When running terraform destroy against a resource that has lifecycle.prevent_destroy set to true, we're encouraged to either "disable prevent_destroy" or "change your config so the plan does not destroy this resource" (the message comes from terraform/eval_check_prevent_destroy.go).

Would it be possible to clarify the latter portion a bit more? Not really sure what I could change in this context!
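
For readers landing here: prevent_destroy lives in a resource's lifecycle block. A minimal sketch (the resource type, name, and values are illustrative, not taken from this issue):

```hcl
resource "aws_instance" "db" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t2.micro"

  lifecycle {
    # Any plan that would destroy this resource fails with the error quoted above.
    prevent_destroy = true
  }
}
```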

@mitchellh
Contributor

Hm, not sure how to clarify it further. It isn't something that is always possible but basically: just don't do the change that requires a destroy of that instance. If you're changing the AMI, for example, then don't.

It isn't a "solution" in that it gets you what you want AND avoids the destroy. It is a solution to get Terraform to continue.
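
To illustrate what "don't do the change" means here: some argument changes, like ami on an aws_instance, force a destroy-and-recreate, which is exactly what prevent_destroy blocks. A hypothetical example (values are placeholders):

```hcl
resource "aws_instance" "web" {
  # Editing `ami` forces replacement (destroy + create), so with
  # prevent_destroy set, the plan errors; leaving the AMI alone lets it proceed.
  ami           = "ami-87654321" # changed from "ami-12345678"
  instance_type = "t2.micro"

  lifecycle {
    prevent_destroy = true
  }
}
```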

@phinze
Contributor

phinze commented Jun 25, 2015

@mitchellh I think the wording / behavior question is around the specific scenario of prevent_destroy and terraform destroy.

  • should terraform destroy -force override prevent_destroy?
  • alternatively, should the message include mention of the fact that terraform destroy is impossible when configs include prevent_destroy on any resource?

@phinze phinze reopened this Jun 25, 2015
@nathanielks
Contributor Author

@phinze is correct. That's what I'm looking for :)


@mitchellh
Contributor

Ah, got it. I don't think -force should override it? With destroy you're specifically asking to just kill everything.

@nathanielks
Contributor Author

Well, my main confusion is this: if I'm running destroy and I've set a resource to prevent_destroy, what methods other than lifecycle.prevent_destroy are available to keep the resource from being destroyed? I'm thinking specifically of this portion of the message: "change your config so the plan does not destroy this resource".

@phinze
Contributor

phinze commented Jun 26, 2015

Marked as "documentation" even though this discussion is around an error message. Seemed closest. 😀

@nathanielks do you have some example wording for the error that you feel might be clearer?

@nathanielks
Contributor Author

@phinze let me knock some thoughts around and I'll get back with you!

@nathanielks
Contributor Author

%s: the plan would destroy this resource, but it currently has lifecycle.prevent_destroy set to true. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or adjust the scope of the plan using the -target flag.

Are there any methods available that would allow a destroy plan/apply to continue other than the ones I've presented?
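
Of the two escape hatches in that proposed wording, the config-side one looks roughly like this (a sketch; the resource and values are invented):

```hcl
resource "aws_instance" "db" {
  ami           = "ami-12345678" # placeholder
  instance_type = "t2.micro"

  lifecycle {
    # Temporarily disable the guard so a destroy plan can proceed,
    # then re-enable it afterwards (false is the default).
    prevent_destroy = false
  }
}
```

The other option, narrowing the run with something like terraform plan -target=aws_instance.other, only helps when the protected resource can be left out of the plan entirely.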

@shanielh
Contributor

shanielh commented Aug 3, 2015

I'd prefer that terraform destroy simply skip, rather than fail on, resources marked with prevent_destroy.

phinze added a commit that referenced this issue Aug 12, 2015
phinze added a commit that referenced this issue Aug 13, 2015
phinze added a commit that referenced this issue Aug 13, 2015
@wjessop

wjessop commented Jun 23, 2017

I know this is closed, but I'd love to get a better error here. In particular why the plan would destroy this resource.

I've just cd'ed into the directory, run terraform plan and I get the message:

Error running plan: 1 error(s) occurred:

  • aws_db_instance.db: the plan would destroy this resource, but it currently has lifecycle.prevent_destroy set to true. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or adjust the scope of the plan using the -target flag.

I have almost nothing to work from, git shows no local changes:

$ git diff
$ 

A list of reasons why the resource would be destroyed would help a lot here.

@derFunk

derFunk commented Jul 7, 2017

I just came across the same thing as @wjessop , and it scared the hell out of me.

Does "the plan would destroy this resource" mean that terraform plan would actually destroy a resource on AWS? Or just that the generated plan would include destroying a resource, which is fine and expected?

But the plan should be generated anyways, regardless of the lifecycle.prevent_destroy setting. Terraform should just prevent a destroy when issuing terraform apply or terraform destroy.

@apparentlymart
Contributor

Hi all,

I think I see where the ambiguity is confusing here: Terraform is trying to say "I can't produce a plan that contains a destroy for this because the config tells me not to", but due to some imprecise wording it sounds more like it's saying "I can't create a plan because creating the plan would destroy the resource". Is that a good summary of the confusion here?

terraform plan is failing here not because terraform plan would destroy the resource, but because any plan it would hypothetically produce would be un-applyable (would fail immediately due to the presence of prevent_destroy), and Terraform prefers where possible to fail at plan time rather than apply time.

Given the age of this issue, at this point it's probably better for one of you to open a fresh issue describing the problem you've run into so we can dig into it some more. It feels like what we're discussing here is a bit of an off-shoot issue of this one, worthy of discussion in its own right.

@derFunk

derFunk commented Jul 10, 2017

Good summary of the confusion here 👍.

@wjessop

wjessop commented Jul 10, 2017

@apparentlymart FWIW I've not experienced any confusion that the planning stage itself would destroy the resource; the confusion for me is not being able to work out why executing the plan would destroy the resource. I think actually displaying the plan, even when that plan would destroy the resource, might clear that up.

@apparentlymart
Contributor

Thanks for the clarification, @wjessop. Seems pretty reasonable for us to show the failed plan; I'm not sure how easy it will be since I'm not sure off the top of my head where in Terraform this is handled, but I'm imagining the result being something like this:

- aws_instance.important (blocked by prevent_destroy)

And then, of course, a similar error message would appear after the diff to say that the plan can't be created.

Is that the sort of thing you were thinking of?

@wjessop

wjessop commented Jul 10, 2017

@apparentlymart That sounds great!

@apparentlymart
Contributor

Cool! Given the age of this issue, let's start over with a fresh issue here since this is a bit different than what this issue was originally about.

@ghost

ghost commented Apr 8, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 8, 2020
7 participants