azurerm_kubernetes_cluster - unable to increase min_node value when auto scale is enabled #8576
Comments
Thanks for opening this issue. Per the documentation, node_count is set from min_count when node_count isn't specified and enable_auto_scaling is true, so I assume this is expected behavior. I assume you have to explicitly set node_count, and it has to be equal to or greater than min_count, when min_count is updated.
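The suggestion above, sketched as an azurerm_kubernetes_cluster node pool block (the name and VM size are hypothetical; the point is only the node_count/min_count relationship):

```hcl
default_node_pool {
  name                = "default"
  vm_size             = "Standard_DS2_v2"  # hypothetical
  enable_auto_scaling = true
  node_count          = 3  # explicitly set, equal to or greater than min_count
  min_count           = 3
  max_count           = 5
}
```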
Thanks for the reply, @neil-yechenwei. That said, I have attempted to explicitly set node_count to be equal to or greater than min_count. The following error is from a cluster that currently has 3 nodes:

The following error is from another attempt:
I assume the documentation wording, "node_count - (Optional) The initial number of nodes which should exist in this Node Pool. If specified this must be between 1 and 100 and between min_count and max_count.", already indicates that node_count is only used for initialization. I think the "Error: expanding ..." error is the expected one. For further usage questions, please raise them on the HashiCorp Community Forums. Thanks.
I am aware of the expected error: "Error: expanding default_node_pool: cannot change node_count when enable_auto_scaling is set to true". That was just for demonstration purposes, given your previous comment. But the original issue still stands: I am unable to increase min_count even when node_count is not set (although it does appear in the state file).
Yes, I assume you're right. I am updating the code so that min_count is no longer compared against node_count when enable_auto_scaling is true, because node_count is not meaningful during an update. I have submitted a PR to fix it. Thanks.
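A sketch of the configuration the fix is meant to allow: raising min_count without setting node_count at all (arguments other than the auto-scaling bounds are hypothetical):

```hcl
default_node_pool {
  name                = "default"
  vm_size             = "Standard_DS2_v2"  # hypothetical
  enable_auto_scaling = true
  # node_count intentionally omitted; the autoscaler manages it
  min_count           = 3  # raised from 2
  max_count           = 5
}
```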
This has been released in version 2.34.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

```hcl
provider "azurerm" {
  version = "~> 2.34.0"
}
# ... other configuration ...
```
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉, please reach out to my human friends 👉 [email protected]. Thanks!
Terraform (and AzureRM Provider) Version
Terraform v0.12.29
Affected Resource(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
Expected Behavior

At the time of testing, AKS had 2 nodes in the default node pool; the intention was to increase the minimum node count to 3. After changing the value of `min_nodes` from 2 to 3, the expected result would be for a Terraform plan to be applied successfully: AKS would auto-scale from a minimum of 2 nodes to 3. It is noted that `node_count` is optional when `enable_auto_scaling` is set to true.

Actual Behavior

The Terraform apply step fails with the following output. `node_count` is not set in the Terraform configuration but is present in the state file. Removing `node_count` from the state file yields the same result; moreover, `"node_count": 2,` is returned to the state file after a `terraform apply`.
Steps to Reproduce

1. Change `min_nodes = 2` to `min_nodes = 3`
2. `terraform plan`
3. `terraform apply`
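Step 1 corresponds to editing only the auto-scaling lower bound in the node pool block. A minimal sketch (surrounding arguments are hypothetical; note that the argument documented for azurerm_kubernetes_cluster is min_count, so min_nodes above presumably refers to it):

```hcl
default_node_pool {
  name                = "default"
  vm_size             = "Standard_DS2_v2"  # hypothetical
  enable_auto_scaling = true
  min_count           = 3  # changed from 2
  max_count           = 5
}
```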
Important Factoids
References
https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html