
Support for accessing data about single AKS cluster node pool #5134

Closed
jmcshane opened this issue Dec 11, 2019 · 3 comments · Fixed by #7233

Comments

jmcshane (Contributor) commented Dec 11, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

Move the agent_pool_profiles field out of the azurerm_kubernetes_cluster data source and into its own data source, corresponding to the way the resources are managed after the changes in #4899.

New or Affected Resource(s)

  • azurerm_kubernetes_cluster_node_pool

Potential Terraform Configuration

data "azurerm_kubernetes_cluster_node_pool" "example" {
  name                  = "internal"
  kubernetes_cluster_id = data.azurerm_kubernetes_cluster.example.id
}

This would return a spec similar to the agent_pool_profiles block in the current azurerm_kubernetes_cluster data source.
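
For illustration, a hedged sketch of how the returned attributes might be consumed; the attribute names node_count and vm_size are assumptions carried over from today's agent_pool_profiles schema, not a confirmed interface:

output "internal_pool_node_count" {
  # Assumed attribute, mirroring agent_pool_profiles
  value = data.azurerm_kubernetes_cluster_node_pool.example.node_count
}

output "internal_pool_vm_size" {
  # Assumed attribute, mirroring agent_pool_profiles
  value = data.azurerm_kubernetes_cluster_node_pool.example.vm_size
}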

References

jmcshane changed the title from "Support for [thing]" to "Support for accessing data about single AKS cluster node pool" on Dec 11, 2019
maarek commented Dec 17, 2019

@jmcshane Sorry, I didn't get a moment to write this up, so thanks for getting ahead of me.

Currently, another team creates the AKS clusters through an automated process of their own and then hands them off to the development teams to manage. This means that once an AKS cluster has been provisioned, I have to import it into my own Terraform configuration.

The first thing my configuration does to maintain state is fetch the current state of the cluster resource and substitute those values into the azurerm resource arguments for settings my team does not control. This lets me manage only certain things, such as node count, Kubernetes version, and add-ons, rather than every detail of the cluster; a sketch of the pattern follows.
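
A minimal sketch of that hand-off pattern, assuming illustrative names (example-aks, example-rg) and variables (var.kubernetes_version, var.node_count); the agent_pool_profiles attribute path is likewise an assumption based on the data source schema discussed in this issue:

# Read the current state of the externally provisioned cluster.
data "azurerm_kubernetes_cluster" "current" {
  name                = "example-aks"
  resource_group_name = "example-rg"
}

# Imported resource: pass through values the team does not control,
# and actively manage only a few arguments.
resource "azurerm_kubernetes_cluster" "imported" {
  name                = data.azurerm_kubernetes_cluster.current.name
  location            = data.azurerm_kubernetes_cluster.current.location
  resource_group_name = data.azurerm_kubernetes_cluster.current.resource_group_name
  dns_prefix          = data.azurerm_kubernetes_cluster.current.dns_prefix

  # Actively managed by the development team.
  kubernetes_version = var.kubernetes_version

  default_node_pool {
    name       = "default"
    vm_size    = data.azurerm_kubernetes_cluster.current.agent_pool_profiles[0].vm_size
    node_count = var.node_count
  }

  identity {
    type = "SystemAssigned"
  }
}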

My question in #4898 stemmed from a naming inconsistency: my data source was named differently from the new default_node_pool naming in azurerm. There has been some discussion about whether the node pools will be created by the development team or not, but adding this data source would be helpful in much the same way as the cluster resources I mentioned above.

ghost commented Jun 11, 2020

This has been released in version 2.14.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.14.0"
}
# ... other configuration ...

ghost commented Jul 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators on Jul 11, 2020