
Add kubernetes_version for nodepools. #4327

Closed
asubmani opened this issue Sep 15, 2019 · 14 comments · Fixed by #7233


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

AKS supports having node pools running different versions of Kubernetes. Please add a property that allows setting the Kubernetes version for the agent_pool_profile.

Affected Resource(s)

  • resource.azurerm_kubernetes_cluster
  • data.azurerm_kubernetes_cluster

Potential Terraform Configuration

resource "azurerm_kubernetes_cluster" "akscluster" {
  name                = var.cluster_name
  location            = data.azurerm_resource_group.aksrg.location
  resource_group_name = data.azurerm_resource_group.aksrg.name
  dns_prefix          = var.dns_prefix
  kubernetes_version  = var.k8s_version

agent_pool_profile {
    name            = var.agent_pool_name
    count           = var.agent_count
    vm_size         = var.vm_size #"Standard_B2ms"
    os_type         = "Linux"
    os_disk_size_gb = 120
    vnet_subnet_id  = var.aks_subnetId
    node_pool_kubernetes_version = 1.14.5 # Optional parameter. If specified, this will overrides "kubernetes_version", which is a cluster wide setting.
    max_pods = 110
  }

References

r0bnet commented Sep 17, 2019

Hey @asubmani,
I'd rather call it orchestrator_version to match the name in the API. Is that okay with you?

Also, how about adding a separate resource for the agent pool, since it also exists in the API as its own resource type?
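
For illustration, a standalone node pool resource could look roughly like the sketch below. This is a hedged sketch, hypothetical at the time of this comment; the resource name and arguments approximate what later shipped as azurerm_kubernetes_cluster_node_pool:

resource "azurerm_kubernetes_cluster_node_pool" "extra" {
  name                  = "extra"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.akscluster.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 3

  # Per-pool version, independent of the cluster-wide kubernetes_version.
  orchestrator_version = "1.14.5"
}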

r0bnet commented Sep 18, 2019

PR: #4355

asubmani (issue author) commented

> Hey @asubmani,
> I'd rather call it orchestrator_version to match the name in the API. Is that okay with you?
> Also, how about adding a separate resource for the agent pool, since it also exists in the API as its own resource type?

orchestrator_version is perfect 👌. Another thing to note is that orchestrator_version should probably be compatible with kubernetes_version. E.g., assuming AKS only supports N-3 for a node pool's orchestrator_version, where N is the cluster's kubernetes_version, it would be great if we could check that during terraform plan.

A possible scenario I am looking to avoid:

kubernetes_version   = 1.11
orchestrator_version = 1.16

ERROR: Orchestrator version cannot be higher than control plane or kubelet ...

  • @sauryadas, would you know about node pool version to AKS cluster version dependencies? I presume there are restrictions/best practices when the Kubernetes version and the node pool orchestrator version differ.

r0bnet commented Sep 27, 2019

> Hey @asubmani,
> I'd rather call it orchestrator_version to match the name in the API. Is that okay with you?
> Also, how about adding a separate resource for the agent pool, since it also exists in the API as its own resource type?
>
> orchestrator_version is perfect 👌. Another thing to note is that orchestrator_version should probably be compatible with kubernetes_version. E.g., assuming AKS only supports N-3 for a node pool's orchestrator_version, where N is the cluster's kubernetes_version, it would be great if we could check that during terraform plan.
>
> A possible scenario I am looking to avoid:
>
> kubernetes_version   = 1.11
> orchestrator_version = 1.16
>
> ERROR: Orchestrator version cannot be higher than control plane or kubelet ...

I thought of something like that, but I didn't find any constraints in that regard. So I omitted validation entirely, hoping that the API validates it correctly and/or the user knows what they are doing.

The only check that seems valid to me would be verifying that kubernetes_version is "higher" than orchestrator_version. This should be easy using a semver library.
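
As a user-side illustration of the kind of plan-time guard discussed here (a minimal sketch, assuming plain major.minor.patch version strings and a Terraform version that supports preconditions; all resource and variable names are illustrative):

locals {
  # Split "1.16.9"-style version strings into numeric parts.
  cp_parts = [for p in split(".", var.kubernetes_version) : tonumber(p)]
  np_parts = [for p in split(".", var.orchestrator_version) : tonumber(p)]

  # True when the node pool is not newer than the control plane (major.minor).
  pool_not_newer = (
    local.np_parts[0] < local.cp_parts[0] ||
    (local.np_parts[0] == local.cp_parts[0] && local.np_parts[1] <= local.cp_parts[1])
  )
}

resource "azurerm_kubernetes_cluster_node_pool" "extra" {
  name                  = "extra"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.akscluster.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 3
  orchestrator_version  = var.orchestrator_version

  lifecycle {
    precondition {
      condition     = local.pool_not_newer
      error_message = "orchestrator_version must not be newer than the control plane kubernetes_version."
    }
  }
}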

jluk commented Oct 1, 2019

@asubmani @r0bnet the versions do need to adhere to some rules, defined here: https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#upgrade-a-cluster-control-plane-with-multiple-node-pools

However, IMO it's better to defer the validation to AKS as the source of truth, so you're not constantly trying to stay in sync with AKS if the rules change over time.

jluk commented Oct 1, 2019

To illustrate the moving target of rules for TF, and the desire to avoid chasing it: the supported window of versions for AKS is moving from N-3 to N-2 in a few months to align with Kubernetes support windows (mentioned here: Azure/AKS#1235).

tombuildsstuff modified the milestones: v1.35.0, v1.36.0 (Oct 2, 2019)

r0bnet commented Oct 18, 2019

Can't be done according to MS support. Reason: #4355 (comment)

tombuildsstuff removed this from the v1.36.0 milestone (Oct 22, 2019)

evenh commented Jan 10, 2020

Do you know if this constraint is still valid, @r0bnet? As far as I can see in azure-sdk-for-go version 2019-10-01, the structs ManagedClusterAgentPoolProfile and ManagedClusterAgentPoolProfileProperties contain OrchestratorVersion, which is not documented as read-only.

There is also this comment from @tombuildsstuff: 659816f#diff-6947e91c40156730f5c531d15ca3d798R151


r0bnet commented Jan 10, 2020

@evenh you mean the version constraints? I think so; at least the documentation states that.

Regarding setting the version at all (even on the default agent pool): it might work, but I'm not sure. You should check whether it works. Just add the orchestrator version to the agent pool and deploy a cluster whose master is on a different version.


jluk commented Jan 10, 2020

Also not sure what constraint is being mentioned - but the node pool vs control plane upgrade constraints are here:

https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#upgrade-a-cluster-control-plane-with-multiple-node-pools


asubmani commented Apr 8, 2020

The difference might be between the "default" node pool and "additional" node pools. The SDK/API allows it (I have done this using az aks nodepool upgrade, as per https://docs.microsoft.com/en-us/cli/azure/ext/aks-preview/aks/nodepool?view=azure-cli-latest#ext-aks-preview-az-aks-nodepool-upgrade).

@jluk: regardless of N-3 or N-2, a node pool (leaving aside the default pool created at cluster creation time) cannot be on a higher version than the control plane, so checking this during terraform plan is a valid scenario. The SDK/API probably does not allow changing the "default" node pool, which always follows the version set for the control plane.

The az CLI uses --kubernetes-version to indicate the Kubernetes version for the node pool.


ghost commented Jun 11, 2020

This has been released in version 2.14.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.14.0"
}
# ... other configuration ...
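
For reference, a minimal sketch of the released feature (hedged; the names and versions here are illustrative and the exact schema may vary between provider versions, so check the provider documentation): orchestrator_version can be set on the default node pool, and on azurerm_kubernetes_cluster_node_pool as shown earlier, and must not be newer than the cluster's kubernetes_version:

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "example"
  kubernetes_version  = "1.16.9"

  default_node_pool {
    name                 = "default"
    node_count           = 1
    vm_size              = "Standard_DS2_v2"
    orchestrator_version = "1.16.9"
  }

  identity {
    type = "SystemAssigned"
  }
}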


ghost commented Jul 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators (Jul 11, 2020)