Trying to upgrade to 4.0.0 and getting a panic on azurerm_kubernetes_cluster #27181

Closed
jharlow1 opened this issue Aug 23, 2024 · 1 comment · Fixed by #27183

Comments

@jharlow1

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave comments along the lines of "+1", "me too" or "any updates", they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

Terraform (and AzureRM Provider) Version

  • Terraform Core version: 1.9.5
  • AzureRM Provider version: 4.0.0

Affected Resource(s)/Data Source(s)

azurerm_kubernetes_cluster

Terraform Configuration Files

esource "azurerm_kubernetes_cluster" "cluster" {
  name                                = local.cluster_name
  location                            = azurerm_resource_group.aks.location
  resource_group_name                 = azurerm_resource_group.aks.name
  node_resource_group                 = var.node_resource_group_name
  dns_prefix                          = local.cluster_name
  kubernetes_version                  = var.kubernetes_version
  private_cluster_enabled             = var.enable_private_api
  workload_identity_enabled           = var.enable_workload_identity
  oidc_issuer_enabled                 = var.enable_oidc_issuer
  sku_tier                            = var.sku_tier
  automatic_upgrade_channel           = var.automatic_upgrade_channel
  node_os_upgrade_channel             = var.node_os_upgrade_channel
  private_cluster_public_fqdn_enabled = var.enable_public_api_dns
  role_based_access_control_enabled   = var.enable_rbac

  azure_active_directory_role_based_access_control {
    azure_rbac_enabled     = false
    admin_group_object_ids = [var.aks_admin_group_id]
    tenant_id              = var.tenant_id
  }

  api_server_access_profile {
    authorized_ip_ranges = var.enable_private_api ? [] : var.api_authorized_subnets
  }

  auto_scaler_profile {
    balance_similar_node_groups   = var.balance_similar_node_groups
    skip_nodes_with_local_storage = var.skip_nodes_with_local_storage
  }

  default_node_pool {
    name                         = var.default_nodepool_config.name
    vnet_subnet_id               = data.azurerm_subnet.private_subnet.id
    vm_size                      = var.default_nodepool_config.vm_size
    auto_scaling_enabled         = true
    min_count                    = var.default_nodepool_config.min_count
    max_count                    = var.default_nodepool_config.max_count
    max_pods                     = 110
    host_encryption_enabled      = false
    node_public_ip_enabled       = var.default_nodepool_config.enable_node_public_ip
    os_disk_size_gb              = var.default_nodepool_config.disk_size
    temporary_name_for_rotation  = "${var.default_nodepool_config.name}tmp"
    zones                        = var.default_nodepool_config.zones
    only_critical_addons_enabled = var.create_user_nodepools

    tags = merge({
      Environment = title(var.environment)
      Stack       = var.stack
    }, var.tags)

I can provide more details on the variable values if needed.

Description / Feedback

Running an apply against an existing cluster after upgrading the provider to 4.0.0 crashes the plugin:

│ Error: Request cancelled
│
│   with azurerm_kubernetes_cluster.cluster,
│   on main.tf line 11, in resource "azurerm_kubernetes_cluster" "cluster":
│   11: resource "azurerm_kubernetes_cluster" "cluster" {
│
│ The plugin.(*GRPCProvider).ApplyResourceChange request was cancelled.
╵

Stack trace from the terraform-provider-azurerm_v4.0.0_x5 plugin:

panic: interface conversion: interface {} is nil, not map[string]interface {}

goroutine 97 [running]:
github.com/hashicorp/terraform-provider-azurerm/internal/services/containers.expandKubernetesClusterAPIAccessProfile(0x140012be210?)
	github.com/hashicorp/terraform-provider-azurerm/internal/services/containers/kubernetes_cluster_resource.go:3357 +0x66c
github.com/hashicorp/terraform-provider-azurerm/internal/services/containers.resourceKubernetesClusterUpdate(0x1400471a980, {0x1093e6f00?, 0x14001f3a480?})
	github.com/hashicorp/terraform-provider-azurerm/internal/services/containers/kubernetes_cluster_resource.go:2240 +0xdb8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).update(0x10a46beb8?, {0x10a46beb8?, 0x14004a1fa40?}, 0xd?, {0x1093e6f00?, 0x14001f3a480?})
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:800 +0x134
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x14000d495e0, {0x10a46beb8, 0x14004a1fa40}, 0x14001f8d110, 0x1400471a080, {0x1093e6f00, 0x14001f3a480})
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:919 +0x658
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x1400095ddd0, {0x10a46beb8?, 0x14004a1f950?}, 0x14004e914a0)
	github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1078 +0xb08
github.com/hashicorp/terraform-plugin-mux/tf5muxserver.(*muxServer).ApplyResourceChange(0x10a46bef0?, {0x10a46beb8?, 0x14004a1f650?}, 0x14004e914a0)
	github.com/hashicorp/[email protected]/tf5muxserver/mux_server_ApplyResourceChange.go:36 +0x184
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x14001aa63c0, {0x10a46beb8?, 0x14004a1ec60?}, 0x14004eb15e0)
	github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:865 +0x2b0
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x10a109640?, 0x14001aa63c0}, {0x10a46beb8, 0x14004a1ec60}, 0x14004e9f480, 0x0)
	github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:518 +0x164
google.golang.org/grpc.(*Server).processUnaryRPC(0x140001e1000, {0x10a46beb8, 0x14004a1eba0}, {0x10a4987c0, 0x14000271080}, 0x140023ebb00, 0x140012b8900, 0x10f9eda18, 0x0)
	google.golang.org/[email protected]/server.go:1369 +0xba0
google.golang.org/grpc.(*Server).handleStream(0x140001e1000, {0x10a4987c0, 0x14000271080}, 0x140023ebb00)
	google.golang.org/[email protected]/server.go:1780 +0xc80
google.golang.org/grpc.(*Server).serveStreams.func2.1()
	google.golang.org/[email protected]/server.go:1019 +0x8c
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 55
	google.golang.org/[email protected]/server.go:1030 +0x150

Error: The terraform-provider-azurerm_v4.0.0_x5 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
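
For context, the panic message points at an unchecked type assertion inside expandKubernetesClusterAPIAccessProfile: an element of the api_server_access_profile list arrives as nil and is asserted straight to map[string]interface{}. Below is a minimal Go sketch of that failure mode, not the provider's actual code; the function name, variable names, and the guarded variant are illustrative only.

package main

import "fmt"

// expandAccessProfile mimics the shape of the failing expansion: it receives
// the raw []interface{} the SDK produces for a block and reads the first
// element as a map.
func expandAccessProfile(input []interface{}) {
	if len(input) == 0 {
		return
	}

	// Unchecked assertion: if input[0] is nil this panics with
	// "interface conversion: interface {} is nil, not map[string]interface {}",
	// matching the stack trace above.
	// raw := input[0].(map[string]interface{})

	// Guarded version using the comma-ok idiom, the usual way to tolerate a
	// nil block element.
	raw, ok := input[0].(map[string]interface{})
	if !ok || raw == nil {
		fmt.Println("api_server_access_profile element is nil; nothing to expand")
		return
	}
	fmt.Println("authorized_ip_ranges:", raw["authorized_ip_ranges"])
}

func main() {
	// A list whose only element is nil reproduces the crash pattern with the
	// unchecked assertion and is handled cleanly by the guarded version.
	expandAccessProfile([]interface{}{nil})
}

This sketch only illustrates why the assertion panics when the block element is nil; the actual change that resolved the issue landed in #27183.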

References


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Sep 30, 2024