
azurerm_kubernetes_cluster - unable to increase min_count value when autoscaling is enabled #8576

Closed · leesutcliffe opened this issue Sep 22, 2020 · 7 comments · Fixed by #8619

@leesutcliffe (Contributor) commented Sep 22, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.29

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "example" {
  name                            = "example-aks"
  location                        = azurerm_resource_group.example.location
  resource_group_name             = azurerm_resource_group.example.name
  kubernetes_version              = "1.16.13"
  node_resource_group             = "example-rg"
  dns_prefix                      = "aks-example"
  enable_pod_security_policy      = false
  private_cluster_enabled         = false
  api_server_authorized_ip_ranges = null
 
  default_node_pool {
    name                = "default"
    enable_auto_scaling = true
    min_count           = 3
    max_count           = 8
    vm_size             = "Standard_D2_v2"
    os_disk_size_gb     = 30
    vnet_subnet_id      = "/subscriptions/*/resourceGroups/example-rg/providers/Microsoft.Network/virtualNetworks/example-vnet/subnets/example-snet-k8s"
    max_pods            = 60
    type                = "VirtualMachineScaleSets"
    availability_zones  = ["1", "2", "3"]
    orchestrator_version = "1.16.13"
  }
}

Expected Behavior

At the time of testing, the AKS cluster had 2 nodes in the default node pool; the intention was to increase `min_count` to 3.

After changing `min_count` from 2 to 3, the expected result would be for the Terraform plan to apply successfully and for AKS to auto-scale the pool from a minimum of 2 nodes to 3.
Note that `node_count` is optional when `enable_auto_scaling` is set to `true`.

Actual Behavior

Terraform apply step fails with the following output

Error: expanding `default_node_pool`: `node_count`(2) must be equal to or greater than `min_count`(3) when `enable_auto_scaling` is set to `true`

`node_count` is not set in the Terraform configuration but is present in the state file.
Removing `node_count` from the state file yields the same result; moreover, `"node_count": 2` is written back to the state file after a `terraform apply`.
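
Independent of the provider bug, a commonly used pattern for autoscaled node pools is to have Terraform ignore drift on `node_count`, since the cluster autoscaler (not Terraform) owns the live node count. A minimal sketch of that pattern, reusing the resource name from the configuration above; note this lifecycle block alone would not avoid the expand-time error reported here:

resource "azurerm_kubernetes_cluster" "example" {
  # ... configuration as above ...

  lifecycle {
    # The cluster autoscaler adjusts node_count at runtime, so ignore
    # changes to it rather than letting Terraform try to reconcile.
    ignore_changes = [default_node_pool[0].node_count]
  }
}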

Steps to Reproduce

  1. AKS cluster running with two nodes and autoscaling configured
  2. Change min_count = 2 to min_count = 3
  3. terraform plan
  4. terraform apply

References

https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html

@neil-yechenwei (Contributor) commented Sep 24, 2020

Thanks for opening this issue. Per the documentation, `node_count` is set from `min_count` when `node_count` isn't specified and `enable_auto_scaling` is `true`, so I assume this is expected behavior. I assume you have to explicitly set `node_count` to a value equal to or greater than `min_count` when `min_count` is updated.
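
A minimal sketch of what that suggestion would look like in the `default_node_pool` block from the original configuration (values are illustrative; as the follow-up comments show, this path hits a different error):

default_node_pool {
  name                = "default"
  enable_auto_scaling = true
  min_count           = 3
  max_count           = 8
  node_count          = 3 # explicitly set, equal to min_count, per the suggestion
  vm_size             = "Standard_D2_v2"
}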

@leesutcliffe (Contributor, Author) commented Sep 24, 2020

Thanks for the reply @neil-yechenwei
I cannot see anywhere on the page you reference where it states "node_count would be set with min_count when node_count isn't set and enable_auto_scaling is true".

That said, I have attempted to explicitly set `node_count` to be equal to or greater than `min_count`.

The following error is from a cluster that currently has 3 nodes, attempting to set min_count = 4 while node_count is not set:

Error: expanding `default_node_pool`: `node_count`(3) must be equal to or greater than `min_count`(4) when `enable_auto_scaling` is set to `true`

The following error is from another attempt to set min_count = 4, this time explicitly setting node_count = 4:

Error: expanding `default_node_pool`: cannot change `node_count` when `enable_auto_scaling` is set to `true`

@ghost removed the waiting-response label Sep 24, 2020
@neil-yechenwei (Contributor) commented Sep 24, 2020

I assume the wording "node_count - (Optional) The initial number of nodes which should exist in this Node Pool. If specified this must be between 1 and 100 and between min_count and max_count." in the documentation already indicates that node_count is initialized.

I think the error "Error: expanding default_node_pool: cannot change node_count when enable_auto_scaling is set to true" is expected: it means you cannot update node_count while enable_auto_scaling is set to true.

For further usage questions, please raise them on the HashiCorp Community Forums. Thanks.

@leesutcliffe (Contributor, Author)

I am aware of the expected error "Error: expanding default_node_pool: cannot change node_count when enable_auto_scaling is set to true" - this was included for demonstration purposes, given your previous comment.

But the original issue still stands: it is not possible to increase min_count even when node_count is not set in the configuration (although it does appear in the state file).

@neil-yechenwei (Contributor) commented Sep 25, 2020

Yes, I assume you're right. I am updating the code that compares min_count with node_count when enable_auto_scaling is true, because node_count is not meaningful during an update. I have submitted a PR to fix it. Thanks.
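
With that change, a configuration like the sketch below (min_count raised, node_count omitted) should plan and apply cleanly against an autoscaled pool; the values reuse the pool from the original report:

default_node_pool {
  name                = "default"
  enable_auto_scaling = true
  min_count           = 4 # raised; node_count intentionally omitted
  max_count           = 8
  vm_size             = "Standard_D2_v2"
}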

@ghost commented Oct 29, 2020

This has been released in version 2.34.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.34.0"
}
# ... other configuration ...
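
For Terraform 0.13 and later, the equivalent version constraint would typically live in a required_providers block instead; a sketch:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.34.0"
    }
  }
}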

@ghost commented Nov 22, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost locked this issue as resolved and limited conversation to collaborators Nov 22, 2020