
Kubernetes cluster creation failed: expected one Org VDC Network from Capvcd type, but got 0 #1258

Closed
cbotha opened this issue Apr 29, 2024 · 2 comments · Fixed by #1266
cbotha commented Apr 29, 2024

Hello,

Terraform Version

Terraform v1.7.3
on darwin_amd64

  • provider registry.terraform.io/vmware/vcd v3.12.1

Affected Resource(s)


  • vcd_cse_kubernetes_cluster

Terraform Configuration Files

# Configure the VMware Cloud Director Provider
provider "vcd" {
  user                 = "none"
  password             = "none"
  auth_type            = "api_token"
  api_token            = ""
  org                  = "ORG"
  vdc                  = "VDC"
  url                  = ""
  max_retry_timeout    = 600
  allow_unverified_ssl = true
}

terraform {
  required_providers {
    vcd = {
      version = "~> 3.12"
      source  = "vmware/vcd"
    }
  }
}

data "vcd_catalog" "tkg_catalog" {
  org  = "ORG"
  name = "Catalog"
}

# Fetch a valid Kubernetes template OVA. If it's not valid, cluster creation will fail.
data "vcd_catalog_vapp_template" "tkg_ova" {
  org        = data.vcd_catalog.tkg_catalog.org
  catalog_id = data.vcd_catalog.tkg_catalog.id
  name       = "Ubuntu 22.04 and Kubernetes v1.28.4+vmware.1"
}

data "vcd_network_routed_v2" "routed" {
  org             = data.vcd_nsxt_edgegateway.existing.org
  edge_gateway_id = data.vcd_nsxt_edgegateway.existing.id
  name            = "Test-Routed"
}

data "vcd_vdc_group" "vdc_group" {
  name = "DC-GROUP"
}

data "vcd_nsxt_edgegateway" "existing" {
  name     = "EDGE"
  owner_id = data.vcd_vdc_group.vdc_group.id
}

data "vcd_org_vdc" "vdc" {
  name = "VDC"
  org  = "ORG"
}

data "vcd_vm_sizing_policy" "tkgmedium" {
  name = "TKG medium"
}

data "vcd_storage_profile" "nlcp1" {
  name = "standard"
}

resource "vcd_cse_kubernetes_cluster" "tkgtest" {
  name                   = "tkgtest"
  cse_version            = "4.2.1"
  runtime                = "tkg"
  org                    = data.vcd_org_vdc.vdc.org
  vdc_id                 = data.vcd_org_vdc.vdc.id
  network_id             = data.vcd_network_routed_v2.routed.id
  kubernetes_template_id = data.vcd_catalog_vapp_template.tkg_ova.id
  api_token_file         = "token.json"

  control_plane {
    machine_count      = 1
    disk_size_gi       = 20
    sizing_policy_id   = data.vcd_vm_sizing_policy.tkgmedium.id
    storage_profile_id = data.vcd_storage_profile.nlcp1.id
  }

  worker_pool {
    name               = "node-pool-1"
    machine_count      = 1
    disk_size_gi       = 20
    sizing_policy_id   = data.vcd_vm_sizing_policy.tkgmedium.id
    storage_profile_id = data.vcd_storage_profile.nlcp1.id
  }

  auto_repair_on_errors      = false
  node_health_check          = false
  operations_timeout_minutes = 35
}

Debug Output

2024-04-26T12:35:00.009+0200 [TRACE] provider.terraform-provider-vcd_v3.12.1: Called downstream: tf_provider_addr=provider tf_req_id=54cd2a26-6b3d-decf-049f-9df05c9fb7f0 tf_resource_type=vcd_cse_kubernetes_cluster tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:910 @module=sdk.helper_schema timestamp="2024-04-26T12:35:00.009+0200"
2024-04-26T12:35:00.009+0200 [TRACE] provider.terraform-provider-vcd_v3.12.1: Received downstream response: tf_req_duration_ms="1.104303e+06" @module=sdk.proto tf_proto_version=5.4 tf_provider_addr=provider tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/tf5serverlogging/downstream_request.go:40 diagnostic_error_count=1 diagnostic_warning_count=0 tf_req_id=54cd2a26-6b3d-decf-049f-9df05c9fb7f0 tf_resource_type=vcd_cse_kubernetes_cluster timestamp="2024-04-26T12:35:00.009+0200"
2024-04-26T12:35:00.010+0200 [ERROR] provider.terraform-provider-vcd_v3.12.1: Response contains error diagnostic: tf_req_id=54cd2a26-6b3d-decf-049f-9df05c9fb7f0 tf_proto_version=5.4 @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/diag/diagnostics.go:62 diagnostic_summary="Kubernetes cluster creation failed: expected one Org VDC Network from Capvcd type, but got 0" tf_resource_type=vcd_cse_kubernetes_cluster tf_rpc=ApplyResourceChange diagnostic_severity=ERROR tf_provider_addr=provider @module=sdk.proto diagnostic_detail="" timestamp="2024-04-26T12:35:00.009+0200"
2024-04-26T12:35:00.010+0200 [TRACE] provider.terraform-provider-vcd_v3.12.1: Served request: @module=sdk.proto tf_provider_addr=provider tf_resource_type=vcd_cse_kubernetes_cluster @caller=github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:872 tf_proto_version=5.4 tf_req_id=54cd2a26-6b3d-decf-049f-9df05c9fb7f0 tf_rpc=ApplyResourceChange timestamp="2024-04-26T12:35:00.009+0200"
2024-04-26T12:35:00.010+0200 [TRACE] maybeTainted: vcd_cse_kubernetes_cluster.tkgtest encountered an error during creation, so it is now marked as tainted
2024-04-26T12:35:00.010+0200 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/vmware/vcd" is in the global cache
2024-04-26T12:35:00.011+0200 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for vcd_cse_kubernetes_cluster.tkgtest
2024-04-26T12:35:00.011+0200 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: removing state object for vcd_cse_kubernetes_cluster.tkgtest
2024-04-26T12:35:00.011+0200 [TRACE] evalApplyProvisioners: vcd_cse_kubernetes_cluster.tkgtest is tainted, so skipping provisioning
2024-04-26T12:35:00.011+0200 [TRACE] maybeTainted: vcd_cse_kubernetes_cluster.tkgtest was already tainted, so nothing to do
2024-04-26T12:35:00.011+0200 [TRACE] terraform.contextPlugins: Schema for provider "registry.terraform.io/vmware/vcd" is in the global cache
2024-04-26T12:35:00.011+0200 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for vcd_cse_kubernetes_cluster.tkgtest
2024-04-26T12:35:00.011+0200 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: removing state object for vcd_cse_kubernetes_cluster.tkgtest
2024-04-26T12:35:00.011+0200 [TRACE] statemgr.Filesystem: reading latest snapshot from terraform.tfstate
2024-04-26T12:35:00.011+0200 [TRACE] statemgr.Filesystem: snapshot file has nil snapshot, but that's okay
2024-04-26T12:35:00.011+0200 [TRACE] statemgr.Filesystem: read nil snapshot
2024-04-26T12:35:00.011+0200 [TRACE] statemgr.Filesystem: no original state snapshot to back up
2024-04-26T12:35:00.011+0200 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 1
2024-04-26T12:35:00.011+0200 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2024-04-26T12:35:00.032+0200 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2024-04-26T12:35:00.032+0200 [ERROR] vertex "vcd_cse_kubernetes_cluster.tkgtest" error: Kubernetes cluster creation failed: expected one Org VDC Network from Capvcd type, but got 0
2024-04-26T12:35:00.032+0200 [TRACE] vertex "vcd_cse_kubernetes_cluster.tkgtest": visit complete, with errors
2024-04-26T12:35:00.033+0200 [TRACE] dag/walk: upstream of "provider["registry.terraform.io/vmware/vcd"] (close)" errored, so skipping
2024-04-26T12:35:00.033+0200 [TRACE] dag/walk: upstream of "root" errored, so skipping
2024-04-26T12:35:00.033+0200 [TRACE] statemgr.Filesystem: reading latest snapshot from terraform.tfstate
2024-04-26T12:35:00.034+0200 [TRACE] statemgr.Filesystem: read snapshot with lineage "3f37a489-b2d2-ce18-1194-d49e859a2fd3" serial 1
2024-04-26T12:35:00.034+0200 [TRACE] statemgr.Filesystem: no original state snapshot to back up
2024-04-26T12:35:00.035+0200 [TRACE] statemgr.Filesystem: no state changes since last snapshot
2024-04-26T12:35:00.035+0200 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2024-04-26T12:35:00.057+0200 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2024-04-26T12:35:00.058+0200 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2024-04-26T12:35:00.062+0200 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-04-26T12:35:00.067+0200 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/vmware/vcd/3.12.1/darwin_amd64/terraform-provider-vcd_v3.12.1 pid=96848
2024-04-26T12:35:00.067+0200 [DEBUG] provider: plugin exited

Expected Behavior

No errors following deployment.

The Tanzu cluster does in fact deploy successfully from a VCD perspective: it becomes available and is completely usable.
It's only the Terraform provider that errors out.

Actual Behavior

Error: Kubernetes cluster creation failed: expected one Org VDC Network from Capvcd type, but got 0
State is not updated with a successful deployment

Steps to Reproduce


  1. terraform apply
@adambarreiro (Collaborator)

Hi @cbotha,

Thanks for reporting. I'll be working on this in vmware/go-vcloud-director#674 and #1266

@adambarreiro (Collaborator)

This is now fixed in the main branch, ready to go for the next release.

If you would like to try it out, you can clone the repo and build/install the provider with make install.

Feedback would be great 🙂
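For anyone testing a locally built provider, Terraform's standard dev_overrides mechanism is an alternative way to point the CLI at the built binary instead of the registry release. This is a minimal sketch of a CLI configuration file (not part of the module); the binary path below is a hypothetical example, not something from this issue:

```hcl
# ~/.terraformrc — Terraform CLI configuration, applied to all runs.
provider_installation {
  dev_overrides {
    # Hypothetical path to the directory containing the locally built
    # terraform-provider-vcd binary.
    "vmware/vcd" = "/Users/me/go/bin"
  }
  # All other providers keep their normal installation behavior.
  direct {}
}
```

With dev_overrides active, terraform plan/apply use the local binary directly and terraform init is skipped for that provider; Terraform prints a warning noting the override is in effect.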
