Issue destroying EMR on AWS #3465
Comments
I've observed this behavior as well.
We're experiencing the same issue, currently only with a security configuration. It seems like Terraform thinks the EMR cluster has been removed and continues removing the other resources the cluster depended on, but the cluster isn't actually removed yet, so destroying the security configuration fails because it's still in use.
We are facing the same issue as well; for the time being, our "solution" has been:
This gives an execution flow without errors, but it may be hard to fully automate with a script: step 2 above would require a CLI command to determine whether the EMR cluster is terminated, and I believe that CLI command is the same bugged one that reports back to Terraform prematurely.
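That polling step could be scripted along these lines. This is a sketch, not the commenter's actual script: the `wait_for_state` helper is a hypothetical name, the cluster id is a placeholder, and `aws emr describe-cluster` is the obvious candidate for the check — with the caveat, raised above, that it may report a terminal state prematurely.

```shell
# wait_for_state <target-state> <command that prints the current state...>
# Re-runs the state-printing command until its output equals the target,
# sleeping between checks.
wait_for_state() {
  target=$1
  shift
  until [ "$("$@")" = "$target" ]; do
    sleep 5
  done
}

# Hypothetical real-world use, polling an EMR cluster (id is a placeholder):
# wait_for_state TERMINATED aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX \
#     --query 'Cluster.Status.State' --output text
```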
👍
Or, another way to do it is a shell script with a loop, relying on the fact that terraform returns a "0" exit code once it has finished with no errors.
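That loop might look something like the following sketch. The `retry_until_ok` helper and the attempt cap are additions for illustration, not part of the comment; the intended real command is `terraform destroy`, shown commented out.

```shell
# retry_until_ok <max-attempts> <command...>
# Re-runs the command until it exits 0, sleeping between attempts,
# and gives up after the given number of attempts.
retry_until_ok() {
  max=$1
  shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $max attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 30
  done
}

# Intended use (not run here):
# retry_until_ok 10 terraform destroy -auto-approve
```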
Hi, any update on when this will be fixed?
I'm having this issue as well when I change the security configuration. Terraform tries to destroy the old one and AWS reports it's still in use; if I wait 5 seconds and run again, all is well.
We have been facing this issue as well while destroying EMR with Terraform, so we can never get an error-free workflow. The cluster does get destroyed, and it's not hampering our work. We use this Terraform version:
Is this issue fixed in Terraform 0.12? Is there a timeline for when it will be fixed, or any workarounds we can use?
Any updates on this?
Seems like a temporary fix is to use a local-exec provisioner, issue a sleep, and hope for the best that the cluster is done terminating before Terraform destroys the security configuration.
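A minimal sketch of that workaround, assuming the security configuration is the resource whose deletion needs delaying. The resource name, configuration file path, and sleep duration are all hypothetical; `when = destroy` provisioners run before Terraform deletes the resource they are attached to.

```hcl
resource "aws_emr_security_configuration" "example" {
  name          = "example-sec-config"                  # hypothetical
  configuration = file("security-configuration.json")   # hypothetical path

  # Runs before Terraform destroys this resource; blindly waits in the
  # hope that the EMR cluster has finished terminating by then.
  provisioner "local-exec" {
    when    = destroy
    command = "sleep 300"
  }
}
```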
This functionality has been released in v3.70.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you! |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. |
This issue was originally opened by @miloup as hashicorp/terraform#17330. It was migrated here as a result of the provider split. The original body of the issue is below.
Hi,
Many of us, across many companies, are facing an issue when trying to destroy an EMR infrastructure on AWS.
The Terraform folder has an emr-requirements.tf file, which contains security groups, a security configuration, etc., and an emr.tf file, which creates the cluster using the configuration in emr-requirements.tf.
When running "terraform apply", the infrastructure is created successfully. But when running "terraform destroy", Terraform does not seem to wait for the EMR cluster to terminate before destroying the remaining resources, which leads to a failure (timeout) because those resources still depend on the cluster. The only way to get a clean "destroy" is to make sure the EMR cluster has terminated (by checking the AWS console, for instance) and then run "terraform destroy" again; at that point, all the remaining resources are destroyed.
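The dependency in question looks roughly like this. This is a minimal sketch, not the reporter's actual configuration: resource names, the release label, and the service role are placeholders, and most required cluster arguments are omitted.

```hcl
# emr-requirements.tf (sketch)
resource "aws_emr_security_configuration" "this" {
  name          = "emr-sec-config"                      # placeholder
  configuration = file("security-configuration.json")   # placeholder
}

# emr.tf (sketch)
resource "aws_emr_cluster" "this" {
  name                   = "example-cluster"            # placeholder
  release_label          = "emr-5.30.0"                 # placeholder
  service_role           = "EMR_DefaultRole"            # placeholder
  security_configuration = aws_emr_security_configuration.this.name
  # ... instance groups, applications, etc. omitted ...
}
```

On destroy, the security configuration cannot be deleted until the cluster that references it has actually terminated, which is exactly the wait the reporter says Terraform skips.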
Would you please fix this bug?
Thanks