AKS node pool k8s version not being updated #5541
Comments
Just checked -- the issue also occurs in
EDIT: I take it back. I've been
EDIT2: Double-takeback, there's a bug in my testbed. Re-starting.
I'm having trouble finding anything canonical on what changed between azure-sdk/container-service versions. If I were to guess, multiple agent pools is a new feature in AKS, so that probably drove the separation of the agent pool k8s upgrade logic from the AKS k8s upgrade logic.
@tombuildsstuff I'd like to contribute the PR, but would appreciate some guidance.
Yeah, this is likely a change in behaviour between the different versions of the Container Service API.
👍 we support multiple node pools via the azurerm_kubernetes_cluster_node_pool resource.
Alternatively we can make them all configurable - but since the default node pool hosts system jobs, it's treated a little differently to the other node pools, so this probably wants some testing to confirm which way to go - maybe @jluk can confirm the expected behaviour here?
The ruleset between the control plane and agent pools is defined in this public document. There is a window of config drift you are allowed to have between the control plane and each agent pool. Rules for valid versions to upgrade node pools: the node pool version must have the same major version as the control plane. Hope this helps.
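To make the rule concrete, here is a minimal Terraform sketch that stays within it. It assumes the orchestrator_version attribute discussed later in this thread (not yet available in the provider at this point in the conversation); all names, locations and versions below are placeholders, not a definitive configuration.

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  # Control plane at 1.15.x
  kubernetes_version = "1.15.10"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"

    # The node pool may drift behind the control plane, but must keep the
    # same major version (1.x here), per the rules above.
    orchestrator_version = "1.14.8"
  }

  identity {
    type = "SystemAssigned"
  }
}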
Great, thanks @jluk :) @tombuildsstuff:
@tombuildsstuff I'm going to work on this tomorrow according to my previous comment, please shout if that's the wrong thing to do 🙂
That sounds fine for now, however the field shouldn't be exposed to users currently and instead wants to be an internal behaviour, until..
Adding properties for ... Hope that helps :)
Per hashicorp#5541, currently AKS node pool versions can never be updated. This occurred due to a change in ARM behavior now that AKS clusters can have multiple agent pools. There's a more involved fix in the discussion on that issue which involves exposing OrchestratorVersion as a settable attribute on agent_pool_profiles, but this change focuses on recovering the old behavior.
Coupling by default could work, as long as it can be disabled. Upgrading the control plane and the default node pool with one terraform apply could be very impactful to a cluster.

I'm commenting to vote on exposing OrchestratorVersion for the default node pool as well as the node pool resource. I would rather have OrchestratorVersion exposed ASAP, even without the proper locking, if it means I can control it. My current plan is to use the AKS REST API directly to upgrade node pools: https://docs.microsoft.com/en-us/rest/api/aks/agentpools/createorupdate

I'm available (with Azure resources as well) to test potential patches and I am able to write golang, but I lack terraform internals experience. Let me know how I can help.
Using the azurerm provider at version "=2.1.0", I've upgraded an azurerm_kubernetes_cluster resource from 1.14 to 1.15. The control plane seems to have upgraded to 1.15, but the VMSS node pool has stayed behind at 1.14. What is the current expected behavior for Kubernetes upgrades and their node pools? I understand (https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#validation-rules-for-upgrades) expresses certain validation conditions. Will the node pool upgrade when it's obligated to, or can we control this as @jstevans suggested with an input?
I'd like to know this!
@tombuildsstuff 🆙
This has been released in version 2.14.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
  version = "~> 2.14.0"
}
# ... other configuration ...
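A rough follow-up sketch of what this looks like once on 2.14.0, assuming the fix exposes orchestrator_version on the standalone node pool resource as discussed above (the referenced cluster and all values are placeholders):

resource "azurerm_kubernetes_cluster_node_pool" "user" {
  name                  = "user"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id # assumed to be defined elsewhere
  vm_size               = "Standard_DS2_v2"
  node_count            = 2

  # Pins this node pool's Kubernetes version independently of the control plane.
  orchestrator_version = "1.16.9"
}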
Hello,
Have you set the new orchestrator_version property?
@EPinci Hello, I haven't. Lemme try!
@EPinci Hey man, really appreciate the quick comment!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Community Note

Terraform (and AzureRM Provider) Version
terraform version: 0.12.8
azurerm provider version: 1.41

Affected Resource(s)
azurerm_kubernetes_cluster

Terraform Configuration Files
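The original configuration files were not included; the following is a minimal sketch of the kind of configuration the report describes, with every name, credential and version invented purely for illustration:

variable "kubernetes_version" {
  default = "1.15.7"
}

resource "azurerm_kubernetes_cluster" "main" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  # Changing var.kubernetes_version is what was expected to upgrade both
  # the control plane and the node pool.
  kubernetes_version = var.kubernetes_version

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000"
    client_secret = "not-a-real-secret"
  }
}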
Expected Behavior
With azurerm_provider == 1.39, the kubelet version in our cluster's node pool would be updated (in addition to the AKS k8s version) by changing var.kubernetes_version.
Actual Behavior
With azurerm_provider == 1.41, the kubelet version in our cluster's node pool is not updated (in addition to the AKS k8s version) by changing var.kubernetes_version.
Important Factoids
Using azurerm_provider == 1.39 and doing terraform apply seems to perform the expected behavior, even if the exact same config was previously run with azurerm_provider == 1.41.