Forced recreating node_pool at any plan #120
Comments
This looks like it's caused by #114. Could you try the latest …
Tried to use …
I do have the same problem. I think 8be6a89 introduced a regression. This is the relevant part of my Terraform plan:
I confirm that using tag v2.1.0, where that commit is not present, I can't reproduce the issue.
@alexkonkin please check the above comments. Looks like #157 introduced a regression.
I'm guessing this is an upstream provider issue; I have opened a provider bug: hashicorp/terraform-provider-google#3786
Resolved for me with these providers:
…
in version 2.1.0 of this module.
@g0blin79 Can you confirm that …
Yes it does. I created a zonal cluster two weeks ago with those provider versions and with version 2.1.0 of this module, and it is working.
Excellent, thank you.
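Based on the workaround discussed above, pinning the module and provider versions can be sketched as follows. The exact google provider version constraint below is an assumption (the thread's provider list is not preserved here); the module tag v2.1.0 is the one reported working:

```hcl
# Sketch of a version pin, assuming a 2.x google provider release that
# predates the regression; adjust the constraint to the versions that
# work in your environment.
provider "google" {
  version = "~> 2.9" # illustrative assumption, not confirmed in the thread
}

module "gke" {
  # Pin to module tag v2.1.0, which does not contain the regressing commit.
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "2.1.0"
  # ... cluster configuration ...
}
```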
Once a simple zonal cluster with a node_pool is correctly created, if I run a terraform apply again without any changes, terraform wants to destroy and recreate the cluster and node_pool. This is my configuration:
As you probably noticed (the presence of remove_default_node_pool in the cluster config), I applied the patch from #15; after that, the problem is somewhat mitigated and terraform wants to destroy and recreate only the node_pool. This is the output of a terraform plan. Could this be related to hashicorp/terraform-provider-google#2115?
Any help will be appreciated.
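For reference, a minimal configuration of the shape described above (a zonal cluster with one node pool and remove_default_node_pool set) might look like the sketch below. All names and values are hypothetical placeholders, not the reporter's actual configuration, and required module inputs may vary by module version:

```hcl
# Hypothetical minimal config matching the description in this issue;
# every name and value here is a placeholder.
module "gke" {
  source     = "terraform-google-modules/kubernetes-engine/google"
  project_id = "my-project"        # placeholder
  name       = "example-cluster"   # placeholder
  region     = "europe-west1"
  zones      = ["europe-west1-b"]  # a single zone, i.e. a zonal cluster
  network    = "default"
  subnetwork = "default"

  remove_default_node_pool = true  # the setting referenced in the issue body

  node_pools = [{
    name         = "default-node-pool"
    machine_type = "n1-standard-1"
    min_count    = 1
    max_count    = 3
  }]
}
```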