diff --git a/docs/user/README.md b/docs/user/README.md index 5c0004cf0..17085e643 100644 --- a/docs/user/README.md +++ b/docs/user/README.md @@ -88,15 +88,14 @@ Please ensure that you have 3 (or 5) control plane machines before creating the The control plane machine set is currently supported for a number of platforms and OpenShift versions. The matrix shows in detail the support for each specific combination. -| Platform \ OpenShift version | <=4.11 | 4.12 | 4.13 | -|------------------------------|:---------------|:--------------------|:--------------------| -| AWS | Not Supported | Full | Full | -| Azure | Not Supported | Manual | Full | -| GCP | Not Supported | Not Supported | Full | -| VSphere | Not Supported | Manual (Single Zone)| Manual (Single Zone)| -| Other Platforms | Not Supported | Not Supported | Not Supported | - -> Note: Google Cloud Platform and OpenStack are planned for inclusion from OpenShift version 4.13 onwards. +| Platform \ OpenShift version | <=4.11 | 4.12 | 4.13 | 4.14 | +|------------------------------|:---------------|:--------------------|:--------------------|:--------------------| +| AWS | Not Supported | Full | Full | Full | +| Azure | Not Supported | Manual | Full | Full | +| GCP | Not Supported | Not Supported | Full | Full | +| OpenStack | Not Supported | Not Supported | Not Supported | Full | +| VSphere | Not Supported | Manual (Single Zone)| Manual (Single Zone)| Manual (Single Zone)| +| Other Platforms | Not Supported | Not Supported | Not Supported | Not Supported | #### Keys diff --git a/docs/user/failure-domains.md b/docs/user/failure-domains.md index b12c874e1..e5fb44358 100644 --- a/docs/user/failure-domains.md +++ b/docs/user/failure-domains.md @@ -83,3 +83,24 @@ An Azure failure domain will look something like the example below: ```yaml - zone: "" ``` + +## OpenStack + +On OpenStack, the failure domains represented in the control plane machine set can be considered analogous to the +OpenStack availability zones (for Nova 
and Cinder). + +> OpenStack Availability Zones are an end-user-visible logical abstraction for partitioning an OpenStack cloud without +> knowing the physical infrastructure. They are used to partition a cloud on arbitrary factors, such as location (country, datacenter, rack), +> network layout and/or power source. +> Compute (Nova) Availability Zones are presented under Host Aggregates and can help to group the compute nodes +> associated with a particular Failure Domain. +> Storage (Cinder) Availability Zones are presented under Availability Zones and help to group the storage backend types by Failure Domain. +> Depending on how the cloud is deployed, a storage backend can span multiple Failure Domains or be limited to a single Failure Domain. +> The names of the Availability Zones depend on the cloud deployment and can be obtained from the OpenStack administrator. + +An OpenStack failure domain will look something like the example below: +```yaml +- availabilityZone: "" + rootVolume: + availabilityZone: "" +``` diff --git a/docs/user/installation.md b/docs/user/installation.md index f0507d82b..57564e457 100644 --- a/docs/user/installation.md +++ b/docs/user/installation.md @@ -238,3 +238,41 @@ failureDomains: > Note: The `targetPools` field may not be set on the GCP providerSpec. This field is required for control plane machines and you should populate this on both the Machine and the ControlPlaneMachineSet resource specs. + +#### Configuring a control plane machine set on OpenStack + +Two fields are supported for now: `availabilityZone` (the instance AZ) and `rootVolume.availabilityZone` (the root volume AZ). +Gather the existing control plane machines and note the zone values of each, if set. +Aside from these fields, the remaining fields in the machine specs should be identical. + +Copy the value from one of the machines into the `providerSpec.value` (6) in the example above. +Remove the AZ fields from the `providerSpec.value` once you have done that.
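As a sketch of the two steps above (copy the spec, then strip the AZ fields), a trimmed `providerSpec.value` might look like this before editing. All field values here are hypothetical placeholders, and only a few fields of the full OpenStack provider spec are shown:

```yaml
# Hypothetical, trimmed excerpt of a copied providerSpec.value.
flavor: m1.xlarge              # keep: identical across control plane machines
serverGroupName: master        # keep (placeholder value)
availabilityZone: nova-az0     # remove: this moves into failureDomains
rootVolume:
  availabilityZone: cinder-az0 # remove: this moves into failureDomains
```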
+ +For each AZ in the cluster, configure a failure domain like below: +```yaml +- availabilityZone: "" + rootVolume: + availabilityZone: "" +``` + +With these zones, the complete `failureDomains` (4 and 5) in the example above should look something like below: +```yaml +failureDomains: + platform: OpenStack + openstack: + - availabilityZone: nova-az0 + rootVolume: + availabilityZone: cinder-az0 + - availabilityZone: nova-az1 + rootVolume: + availabilityZone: cinder-az1 + - availabilityZone: nova-az2 + rootVolume: + availabilityZone: cinder-az2 +``` + +Prior to 4.14, if the masters were configured with Availability Zones (AZs), the installer (via Terraform) would create +a single ServerGroup in OpenStack (the one initially created for master-0, with a name ending in the AZ) but would configure +the Machine ProviderSpecs with different ServerGroups, one per AZ. +So if you upgrade a cluster from a previous release to 4.14, you'll need to follow this [solution](https://access.redhat.com/solutions/7013893).
+ diff --git a/go.mod b/go.mod index bec41a4a2..3d785df0d 100644 --- a/go.mod +++ b/go.mod @@ -7,10 +7,11 @@ require ( github.com/go-test/deep v1.1.0 github.com/golang/mock v1.6.0 github.com/golangci/golangci-lint v1.52.2 + github.com/google/uuid v1.3.0 github.com/onsi/ginkgo/v2 v2.9.5 github.com/onsi/gomega v1.27.7 github.com/openshift/api v0.0.0-20230627091025-b88ff67980ac - github.com/openshift/client-go v0.0.0-20230503144108-75015d2347cb + github.com/openshift/client-go v0.0.0-20230607134213-3cd0021bbee3 github.com/openshift/cluster-api-actuator-pkg/testutils v0.0.0-20230622171654-75c6bcfa831c github.com/openshift/library-go v0.0.0-20230523150659-ab179469ba38 github.com/spf13/pflag v1.0.5 @@ -97,7 +98,6 @@ require ( github.com/google/go-cmp v0.5.9 // indirect github.com/google/gofuzz v1.2.0 // indirect github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 // indirect - github.com/google/uuid v1.3.0 // indirect github.com/gordonklaus/ineffassign v0.0.0-20230107090616-13ace0543b28 // indirect github.com/gostaticanalysis/analysisutil v0.7.1 // indirect github.com/gostaticanalysis/comment v1.4.2 // indirect diff --git a/go.sum b/go.sum index bbe9abaab..33d49074c 100644 --- a/go.sum +++ b/go.sum @@ -447,8 +447,8 @@ github.com/onsi/gomega v1.27.7 h1:fVih9JD6ogIiHUN6ePK7HJidyEDpWGVB5mzM7cWNXoU= github.com/onsi/gomega v1.27.7/go.mod h1:1p8OOlwo2iUUDsHnOrjE5UKYJ+e3W8eQ3qSlRahPmr4= github.com/openshift/api v0.0.0-20230627091025-b88ff67980ac h1:bY6f6tb7ZUNb6Lfsm3r3SMwcDvbvGXjYV+caTnAjVRA= github.com/openshift/api v0.0.0-20230627091025-b88ff67980ac/go.mod h1:4VWG+W22wrB4HfBL88P40DxLEpSOaiBVxUnfalfJo9k= -github.com/openshift/client-go v0.0.0-20230503144108-75015d2347cb h1:Nij5OnaECrkmcRQMAE9LMbQXPo95aqFnf+12B7SyFVI= -github.com/openshift/client-go v0.0.0-20230503144108-75015d2347cb/go.mod h1:Rhb3moCqeiTuGHAbXBOlwPubUMlOZEkrEWTRjIF3jzs= +github.com/openshift/client-go v0.0.0-20230607134213-3cd0021bbee3 h1:uVCq/Sx2y4UZh+qCsCL1BBUJpc3DULHkN4j7XHHgHtw= 
+github.com/openshift/client-go v0.0.0-20230607134213-3cd0021bbee3/go.mod h1:M+VUIcqx5IvgzejcbgmQnxETPrXRYlcufHpw2bAgz9Y= github.com/openshift/cluster-api-actuator-pkg/testutils v0.0.0-20230622171654-75c6bcfa831c h1:2EpVQ7ZZIvpm3PExUIjrIHknRAfyJBr0xjUJFQjYaxA= github.com/openshift/cluster-api-actuator-pkg/testutils v0.0.0-20230622171654-75c6bcfa831c/go.mod h1:w4P7zcu7okmBpkjKJK71rl5hp1a8RFm1NraVrxVqiUs= github.com/openshift/library-go v0.0.0-20230523150659-ab179469ba38 h1:rKEpSwRxeQ6eN915GbcuyikwyWu//V61w5zIUWD9b2U= diff --git a/pkg/controllers/controlplanemachinesetgenerator/aws.go b/pkg/controllers/controlplanemachinesetgenerator/aws.go index f635610f3..258833f90 100644 --- a/pkg/controllers/controlplanemachinesetgenerator/aws.go +++ b/pkg/controllers/controlplanemachinesetgenerator/aws.go @@ -93,7 +93,7 @@ func buildControlPlaneMachineSetAWSMachineSpec(logger logr.Logger, machines []ma } // buildAWSFailureDomains builds an AWS flavored FailureDomains for the ControlPlaneMachineSet. -func buildAWSFailureDomains(failureDomains *failuredomain.Set) machinev1.FailureDomains { +func buildAWSFailureDomains(failureDomains *failuredomain.Set) (machinev1.FailureDomains, error) { //nolint:unparam awsFailureDomains := []machinev1.AWSFailureDomain{} for _, fd := range failureDomains.List() { @@ -105,5 +105,5 @@ func buildAWSFailureDomains(failureDomains *failuredomain.Set) machinev1.Failure Platform: configv1.AWSPlatformType, } - return cpmsFailureDomain + return cpmsFailureDomain, nil } diff --git a/pkg/controllers/controlplanemachinesetgenerator/azure.go b/pkg/controllers/controlplanemachinesetgenerator/azure.go index 5c8be1f58..575d264f6 100644 --- a/pkg/controllers/controlplanemachinesetgenerator/azure.go +++ b/pkg/controllers/controlplanemachinesetgenerator/azure.go @@ -79,7 +79,7 @@ func buildControlPlaneMachineSetAzureMachineSpec(logger logr.Logger, machines [] } // buildAzureFailureDomains builds an Azure flavored FailureDomains for the ControlPlaneMachineSet. 
-func buildAzureFailureDomains(failureDomains *failuredomain.Set) machinev1.FailureDomains { +func buildAzureFailureDomains(failureDomains *failuredomain.Set) (machinev1.FailureDomains, error) { //nolint:unparam azureFailureDomains := []machinev1.AzureFailureDomain{} for _, fd := range failureDomains.List() { @@ -91,5 +91,5 @@ func buildAzureFailureDomains(failureDomains *failuredomain.Set) machinev1.Failu Platform: configv1.AzurePlatformType, } - return cpmsFailureDomain + return cpmsFailureDomain, nil } diff --git a/pkg/controllers/controlplanemachinesetgenerator/controller.go b/pkg/controllers/controlplanemachinesetgenerator/controller.go index b664222f7..a37d09c13 100644 --- a/pkg/controllers/controlplanemachinesetgenerator/controller.go +++ b/pkg/controllers/controlplanemachinesetgenerator/controller.go @@ -74,6 +74,10 @@ var ( errUnsupportedPlatform = errors.New("unsupported platform") // errNilProviderSpec is an error used when provider spec is nil. errNilProviderSpec = errors.New("provider spec is nil") + // errMixedEmptyFailureDomains is an error used when there are machines with different failure domains and one of them is empty. + errMixedEmptyFailureDomains = errors.New("an empty failure domain was found and other failure domains are not empty") + // errInconsistentProviderSpec is an error used when the provider specs are inconsistent. + errInconsistentProviderSpec = errors.New("provider specs are inconsistent") ) // ControlPlaneMachineSetGeneratorReconciler reconciles a ControlPlaneMachineSet object. 
@@ -239,6 +243,11 @@ func (r *ControlPlaneMachineSetGeneratorReconciler) generateControlPlaneMachineS if err != nil { return nil, fmt.Errorf("unable to generate control plane machine set spec: %w", err) } + case configv1.OpenStackPlatformType: + cpmsSpecApplyConfig, err = generateControlPlaneMachineSetOpenStackSpec(logger, machines, machineSets) + if err != nil { + return nil, fmt.Errorf("unable to generate control plane machine set spec: %w", err) + } default: logger.V(1).WithValues("platform", platformType).Info(unsupportedPlatform) return nil, errUnsupportedPlatform diff --git a/pkg/controllers/controlplanemachinesetgenerator/controller_test.go b/pkg/controllers/controlplanemachinesetgenerator/controller_test.go index 31c451672..6867aa300 100644 --- a/pkg/controllers/controlplanemachinesetgenerator/controller_test.go +++ b/pkg/controllers/controlplanemachinesetgenerator/controller_test.go @@ -25,6 +25,7 @@ import ( . "github.com/onsi/gomega" configv1 "github.com/openshift/api/config/v1" machinev1 "github.com/openshift/api/machine/v1" + machinev1alpha1 "github.com/openshift/api/machine/v1alpha1" machinev1beta1 "github.com/openshift/api/machine/v1beta1" "github.com/openshift/cluster-api-actuator-pkg/testutils" configv1resourcebuilder "github.com/openshift/cluster-api-actuator-pkg/testutils/resourcebuilder/config/v1" @@ -2192,3 +2193,724 @@ var _ = Describe("controlplanemachinesetgenerator controller on Nutanix", func() }) }) }) + +var _ = Describe("controlplanemachinesetgenerator controller on OpenStack", func() { + + var ( + az1FailureDomainBuilderOpenStack = machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az1").WithRootVolume(machinev1.RootVolume{ + AvailabilityZone: "cinder-az1", + }) + + az2FailureDomainBuilderOpenStack = machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az2").WithRootVolume(machinev1.RootVolume{ + AvailabilityZone: "cinder-az2", + }) + + az3FailureDomainBuilderOpenStack = 
machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az3").WithRootVolume(machinev1.RootVolume{ + AvailabilityZone: "cinder-az3", + }) + + az4FailureDomainBuilderOpenStack = machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az4") + + az5FailureDomainBuilderOpenStack = machinev1resourcebuilder.OpenStackFailureDomain().WithRootVolume(machinev1.RootVolume{ + AvailabilityZone: "cinder-az5", + }) + + defaultProviderSpecBuilderOpenStack = machinev1beta1resourcebuilder.OpenStackProviderSpec() + + az1ProviderSpecBuilderOpenStack = machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az1").WithRootVolume(&machinev1alpha1.RootVolume{ + Zone: "cinder-az1", + }) + + az2ProviderSpecBuilderOpenStack = machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az2").WithRootVolume(&machinev1alpha1.RootVolume{ + Zone: "cinder-az2", + }) + + az3ProviderSpecBuilderOpenStack = machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az3").WithRootVolume(&machinev1alpha1.RootVolume{ + Zone: "cinder-az3", + }) + + az4ProviderSpecBuilderOpenStack = machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az4") + + az5ProviderSpecBuilderOpenStack = machinev1beta1resourcebuilder.OpenStackProviderSpec().WithRootVolume(&machinev1alpha1.RootVolume{ + Zone: "cinder-az5", + }) + + cpmsEmptyFailureDomainsBuilderOpenStack = machinev1.FailureDomains{} + + cpmsNoFailureDomainsBuilderOpenStack = machinev1.FailureDomains{ + Platform: "", + } + + cpms3FailureDomainsBuilderOpenStack = machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + az1FailureDomainBuilderOpenStack, + az2FailureDomainBuilderOpenStack, + az3FailureDomainBuilderOpenStack, + ) + + cpms5FailureDomainsBuilderOpenStack = machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + az1FailureDomainBuilderOpenStack, + az2FailureDomainBuilderOpenStack, + 
az3FailureDomainBuilderOpenStack, + az4FailureDomainBuilderOpenStack, + az5FailureDomainBuilderOpenStack, + ) + + cpmsInactive3FDsBuilderOpenStack = machinev1resourcebuilder.ControlPlaneMachineSet(). + WithState(machinev1.ControlPlaneMachineSetStateInactive). + WithMachineTemplateBuilder( + machinev1resourcebuilder.OpenShiftMachineV1Beta1Template(). + WithProviderSpecBuilder( + az1ProviderSpecBuilderOpenStack.WithFlavor("m1.xlarge"), + ). + WithFailureDomainsBuilder(machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + az1FailureDomainBuilderOpenStack, + az2FailureDomainBuilderOpenStack, + az3FailureDomainBuilderOpenStack, + )), + ) + + cpmsInactive5FDsBuilderOpenStack = machinev1resourcebuilder.ControlPlaneMachineSet(). + WithState(machinev1.ControlPlaneMachineSetStateInactive). + WithMachineTemplateBuilder( + machinev1resourcebuilder.OpenShiftMachineV1Beta1Template(). + WithProviderSpecBuilder( + az1ProviderSpecBuilderOpenStack.WithFlavor("m1.large"), + ). + WithFailureDomainsBuilder(machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + az1FailureDomainBuilderOpenStack, + az2FailureDomainBuilderOpenStack, + az3FailureDomainBuilderOpenStack, + az4FailureDomainBuilderOpenStack, + az5FailureDomainBuilderOpenStack, + )), + ) + + cpmsActiveOutdatedBuilderOpenStack = machinev1resourcebuilder.ControlPlaneMachineSet(). + WithState(machinev1.ControlPlaneMachineSetStateActive). + WithMachineTemplateBuilder( + machinev1resourcebuilder.OpenShiftMachineV1Beta1Template(). + WithProviderSpecBuilder( + az1ProviderSpecBuilderOpenStack.WithFlavor("m1.xlarge"), + ). + WithFailureDomainsBuilder(machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + az1FailureDomainBuilderOpenStack, + az2FailureDomainBuilderOpenStack, + az3FailureDomainBuilderOpenStack, + )), + ) + + cpmsActiveUpToDateBuilderOpenStack = machinev1resourcebuilder.ControlPlaneMachineSet(). 
+ WithState(machinev1.ControlPlaneMachineSetStateActive). + WithMachineTemplateBuilder( + machinev1resourcebuilder.OpenShiftMachineV1Beta1Template(). + WithProviderSpecBuilder( + az1ProviderSpecBuilderOpenStack.WithFlavor("m1.large"), + ). + WithFailureDomainsBuilder(cpms5FailureDomainsBuilderOpenStack), + ) + ) + + var mgrCancel context.CancelFunc + var mgrDone chan struct{} + var mgr manager.Manager + var reconciler *ControlPlaneMachineSetGeneratorReconciler + + var namespaceName string + var cpms *machinev1.ControlPlaneMachineSet + var machine0, machine1, machine2 *machinev1beta1.Machine + var machineSet0, machineSet1, machineSet2, machineSet3, machineSet4 *machinev1beta1.MachineSet + + startManager := func(mgr *manager.Manager) (context.CancelFunc, chan struct{}) { + mgrCtx, mgrCancel := context.WithCancel(context.Background()) + mgrDone := make(chan struct{}) + + go func() { + defer GinkgoRecover() + defer close(mgrDone) + + Expect((*mgr).Start(mgrCtx)).To(Succeed()) + }() + + return mgrCancel, mgrDone + } + + stopManager := func() { + mgrCancel() + // Wait for the mgrDone to be closed, which will happen once the mgr has stopped + <-mgrDone + } + + create1MachineSets := func() { + machineSetBuilder := machinev1beta1resourcebuilder.MachineSet().WithNamespace(namespaceName) + machineSet0 = machineSetBuilder.WithProviderSpecBuilder(defaultProviderSpecBuilderOpenStack).WithGenerateName("machineset-default-").Build() + + Expect(k8sClient.Create(ctx, machineSet0)).To(Succeed()) + } + + create3MachineSets := func() { + machineSetBuilder := machinev1beta1resourcebuilder.MachineSet().WithNamespace(namespaceName) + machineSet0 = machineSetBuilder.WithProviderSpecBuilder(az1ProviderSpecBuilderOpenStack).WithGenerateName("machineset-az1-").Build() + machineSet1 = machineSetBuilder.WithProviderSpecBuilder(az2ProviderSpecBuilderOpenStack).WithGenerateName("machineset-az2-").Build() + machineSet2 = 
machineSetBuilder.WithProviderSpecBuilder(az3ProviderSpecBuilderOpenStack).WithGenerateName("machineset-az3-").Build() + + Expect(k8sClient.Create(ctx, machineSet0)).To(Succeed()) + Expect(k8sClient.Create(ctx, machineSet1)).To(Succeed()) + Expect(k8sClient.Create(ctx, machineSet2)).To(Succeed()) + } + + create5MachineSets := func() { + create3MachineSets() + + machineSetBuilder := machinev1beta1resourcebuilder.MachineSet().WithNamespace(namespaceName) + machineSet3 = machineSetBuilder.WithProviderSpecBuilder(az4ProviderSpecBuilderOpenStack).WithGenerateName("machineset-az4-").Build() + machineSet4 = machineSetBuilder.WithProviderSpecBuilder(az5ProviderSpecBuilderOpenStack).WithGenerateName("machineset-az5-").Build() + + Expect(k8sClient.Create(ctx, machineSet3)).To(Succeed()) + Expect(k8sClient.Create(ctx, machineSet4)).To(Succeed()) + } + + create3DefaultCPMachines := func() *[]machinev1beta1.Machine { + // Create 3 control plane machines with the same Provider Spec (no failure domain), + // so then we can reliably check which machine Provider Spec is picked for the ControlPlaneMachineSet. + machineBuilder := machinev1beta1resourcebuilder.Machine().AsMaster().WithNamespace(namespaceName) + machine0 = machineBuilder.WithProviderSpecBuilder(defaultProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-0").Build() + machine1 = machineBuilder.WithProviderSpecBuilder(defaultProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-1").Build() + machine2 = machineBuilder.WithProviderSpecBuilder(defaultProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-2").Build() + + // Create Machines with some wait time between them + // to achieve staggered CreationTimestamp(s). 
+ Expect(k8sClient.Create(ctx, machine0)).To(Succeed()) + Expect(k8sClient.Create(ctx, machine1)).To(Succeed()) + Expect(k8sClient.Create(ctx, machine2)).To(Succeed()) + + return &[]machinev1beta1.Machine{*machine0, *machine1, *machine2} + } + + create3CPMachinesWithDifferentServerGroups := func() *[]machinev1beta1.Machine { + // Create 3 control plane machines with differing Provider Specs, + // with three different server groups, so then we can reliably check that ControlPlaneMachineSet Spec won't be generated. + machineBuilder := machinev1beta1resourcebuilder.Machine().AsMaster().WithNamespace(namespaceName) + machine0 = machineBuilder.WithProviderSpecBuilder(az1ProviderSpecBuilderOpenStack.WithFlavor("m1.large").WithServerGroupName("master-latest")).WithName("master-0").Build() + machine1 = machineBuilder.WithProviderSpecBuilder(az1ProviderSpecBuilderOpenStack.WithFlavor("m1.large").WithServerGroupName("master-old")).WithName("master-1").Build() + machine2 = machineBuilder.WithProviderSpecBuilder(az1ProviderSpecBuilderOpenStack.WithFlavor("m1.large").WithServerGroupName("master-old")).WithName("master-2").Build() + + // Create Machines with some wait time between them + // to achieve staggered CreationTimestamp(s). + Expect(k8sClient.Create(ctx, machine0)).To(Succeed()) + Expect(k8sClient.Create(ctx, machine1)).To(Succeed()) + Expect(k8sClient.Create(ctx, machine2)).To(Succeed()) + + return &[]machinev1beta1.Machine{*machine0, *machine1, *machine2} + } + + create3CPMachines := func() *[]machinev1beta1.Machine { + // Create 3 control plane machines with differing Provider Specs, + // so then we can reliably check which machine Provider Spec is picked for the ControlPlaneMachineSet. 
+ machineBuilder := machinev1beta1resourcebuilder.Machine().AsMaster().WithNamespace(namespaceName) + machine0 = machineBuilder.WithProviderSpecBuilder(az1ProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-0").Build() + machine1 = machineBuilder.WithProviderSpecBuilder(az2ProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-1").Build() + machine2 = machineBuilder.WithProviderSpecBuilder(az3ProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-2").Build() + + // Create Machines with some wait time between them + // to achieve staggered CreationTimestamp(s). + Expect(k8sClient.Create(ctx, machine0)).To(Succeed()) + Expect(k8sClient.Create(ctx, machine1)).To(Succeed()) + Expect(k8sClient.Create(ctx, machine2)).To(Succeed()) + + return &[]machinev1beta1.Machine{*machine0, *machine1, *machine2} + } + + createAZ4Machine := func() *machinev1beta1.Machine { + machineBuilder := machinev1beta1resourcebuilder.Machine().AsMaster().WithNamespace(namespaceName) + machine := machineBuilder.WithProviderSpecBuilder(az4ProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-3").Build() + + Expect(k8sClient.Create(ctx, machine)).To(Succeed()) + + return machine + } + + createAZ5Machine := func() *machinev1beta1.Machine { + machineBuilder := machinev1beta1resourcebuilder.Machine().AsMaster().WithNamespace(namespaceName) + machine := machineBuilder.WithProviderSpecBuilder(az5ProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-4").Build() + + Expect(k8sClient.Create(ctx, machine)).To(Succeed()) + + return machine + } + + BeforeEach(func() { + + By("Setting up a namespace for the test") + ns := corev1resourcebuilder.Namespace().WithGenerateName("control-plane-machine-set-controller-").Build() + Expect(k8sClient.Create(ctx, ns)).To(Succeed()) + namespaceName = ns.GetName() + + By("Setting up a new infrastructure for the test") + // Create infrastructure object. 
+ infra := configv1resourcebuilder.Infrastructure().WithName(infrastructureName).AsOpenStack("test").Build() + infraStatus := infra.Status.DeepCopy() + Expect(k8sClient.Create(ctx, infra)).To(Succeed()) + // Update Infrastructure Status. + Eventually(komega.UpdateStatus(infra, func() { + infra.Status = *infraStatus + })).Should(Succeed()) + + By("Setting up a manager and controller") + var err error + mgr, err = ctrl.NewManager(cfg, ctrl.Options{ + Scheme: testScheme, + MetricsBindAddress: "0", + Port: testEnv.WebhookInstallOptions.LocalServingPort, + Host: testEnv.WebhookInstallOptions.LocalServingHost, + CertDir: testEnv.WebhookInstallOptions.LocalServingCertDir, + }) + Expect(err).ToNot(HaveOccurred(), "Manager should be able to be created") + reconciler = &ControlPlaneMachineSetGeneratorReconciler{ + Client: mgr.GetClient(), + Namespace: namespaceName, + } + Expect(reconciler.SetupWithManager(mgr)).To(Succeed(), "Reconciler should be able to setup with manager") + + }) + + AfterEach(func() { + testutils.CleanupResources(Default, ctx, cfg, k8sClient, namespaceName, + &corev1.Node{}, + &machinev1beta1.Machine{}, + &configv1.Infrastructure{}, + &machinev1beta1.MachineSet{}, + &machinev1.ControlPlaneMachineSet{}, + ) + }) + + JustBeforeEach(func() { + By("Starting the manager") + mgrCancel, mgrDone = startManager(&mgr) + }) + + JustAfterEach(func() { + By("Stopping the manager") + stopManager() + }) + + Context("when a Control Plane Machine Set doesn't exist", func() { + BeforeEach(func() { + cpms = &machinev1.ControlPlaneMachineSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterControlPlaneMachineSetName, + Namespace: namespaceName, + }, + } + }) + + Context("with 1 Machine Sets", func() { + BeforeEach(func() { + By("Creating MachineSets") + create1MachineSets() + }) + + Context("with 3 different server group names", func() { + BeforeEach(func() { + By("Creating Machines") + create3CPMachinesWithDifferentServerGroups() + }) + + It("should not create the 
ControlPlaneMachineSet", func() { + By("Checking the Control Plane Machine Set has not been created") + Eventually(komega.Get(cpms)).ShouldNot(Succeed()) + Consistently(komega.Get(cpms)).Should(MatchError("controlplanemachinesets.machine.openshift.io \"" + clusterControlPlaneMachineSetName + "\" not found")) + }) + }) + + Context("with 3 existing control plane machines", func() { + BeforeEach(func() { + By("Creating Control Plane Machines") + create3DefaultCPMachines() + }) + + It("should create the ControlPlaneMachineSet with the expected fields", func() { + By("Checking the Control Plane Machine Set has been created") + Eventually(komega.Get(cpms)).Should(Succeed()) + Expect(cpms.Spec.State).To(Equal(machinev1.ControlPlaneMachineSetStateInactive)) + Expect(*cpms.Spec.Replicas).To(Equal(int32(3))) + }) + + It("should create the ControlPlaneMachineSet with the provider spec matching the youngest machine provider spec", func() { + By("Checking the Control Plane Machine Set has been created") + Eventually(komega.Get(cpms)).Should(Succeed()) + // In this case expect the machine Provider Spec of the youngest machine to be used here. + // In this case it should be `machine-2` given that's the one we created last.
+ cpmsProviderSpec, err := providerconfig.NewProviderConfigFromMachineSpec(mgr.GetLogger(), cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.Spec) + Expect(err).To(BeNil()) + + machineProviderSpec, err := providerconfig.NewProviderConfigFromMachineSpec(mgr.GetLogger(), machine2.Spec) + Expect(err).To(BeNil()) + + openStackMachineProviderConfig := machineProviderSpec.OpenStack().Config() + Expect(cpmsProviderSpec.OpenStack().Config()).To(Equal(openStackMachineProviderConfig)) + }) + + It("should create the ControlPlaneMachineSet with no failure domain", func() { + By("Checking the Control Plane Machine Set has been created") + Eventually(komega.Get(cpms)).Should(Succeed()) + + Expect(cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains).To(Equal(cpmsEmptyFailureDomainsBuilderOpenStack)) + Expect(cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains).To(Equal(cpmsNoFailureDomainsBuilderOpenStack)) + }) + + Context("With additional Machines adding additional failure domains", func() { + BeforeEach(func() { + By("Creating additional Machines") + createAZ4Machine() + createAZ5Machine() + }) + + It("should have not created the ControlPlaneMachineSet with a mix of empty and non empty failure domains", func() { + Eventually(komega.Get(cpms)).ShouldNot(Succeed()) + Consistently(komega.Get(cpms)).Should(MatchError("controlplanemachinesets.machine.openshift.io \"" + clusterControlPlaneMachineSetName + "\" not found")) + }) + }) + }) + }) + + Context("with 5 Machine Sets", func() { + BeforeEach(func() { + By("Creating MachineSets") + create5MachineSets() + }) + + Context("with 3 existing control plane machines", func() { + BeforeEach(func() { + By("Creating Control Plane Machines") + create3CPMachines() + }) + + It("should create the ControlPlaneMachineSet with the expected fields", func() { + By("Checking the Control Plane Machine Set has been created") + Eventually(komega.Get(cpms)).Should(Succeed()) + 
Expect(cpms.Spec.State).To(Equal(machinev1.ControlPlaneMachineSetStateInactive)) + Expect(*cpms.Spec.Replicas).To(Equal(int32(3))) + }) + + It("should create the ControlPlaneMachineSet with the provider spec matching the youngest machine provider spec", func() { + By("Checking the Control Plane Machine Set has been created") + Eventually(komega.Get(cpms)).Should(Succeed()) + // In this case expect the machine Provider Spec of the youngest machine to be used here. + // In this case it should be `machine-2` given that's the one we created last. + cpmsProviderSpec, err := providerconfig.NewProviderConfigFromMachineSpec(mgr.GetLogger(), cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.Spec) + Expect(err).To(BeNil()) + + machineProviderSpec, err := providerconfig.NewProviderConfigFromMachineSpec(mgr.GetLogger(), machine2.Spec) + Expect(err).To(BeNil()) + + // Remove from the machine Provider Spec the fields that won't be + // present on the ControlPlaneMachineSet Provider Spec. + openStackMachineProviderConfig := machineProviderSpec.OpenStack().Config() + if openStackMachineProviderConfig.AvailabilityZone != "" { + openStackMachineProviderConfig.AvailabilityZone = "" + } + if openStackMachineProviderConfig.RootVolume != nil && openStackMachineProviderConfig.RootVolume.Zone != "" { + openStackMachineProviderConfig.RootVolume.Zone = "" + } + + Expect(cpmsProviderSpec.OpenStack().Config()).To(Equal(openStackMachineProviderConfig)) + }) + + Context("With additional MachineSets duplicating failure domains", func() { + BeforeEach(func() { + By("Creating additional MachineSets") + create3MachineSets() + }) + + It("should create the ControlPlaneMachineSet with only one copy of each failure domain", func() { + By("Checking the Control Plane Machine Set has been created") + Eventually(komega.Get(cpms)).Should(Succeed()) + + Expect(cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains).To(Equal(cpms5FailureDomainsBuilderOpenStack.BuildFailureDomains())) + }) + }) + }) 
+ }) + + Context("with 3 Machine Sets", func() { + BeforeEach(func() { + By("Creating MachineSets") + create3MachineSets() + }) + + Context("with 3 existing control plane machines", func() { + BeforeEach(func() { + By("Creating Control Plane Machines") + create3CPMachines() + }) + + It("should create the ControlPlaneMachineSet with the expected fields", func() { + By("Checking the Control Plane Machine Set has been created") + Eventually(komega.Get(cpms)).Should(Succeed()) + Expect(cpms.Spec.State).To(Equal(machinev1.ControlPlaneMachineSetStateInactive)) + Expect(*cpms.Spec.Replicas).To(Equal(int32(3))) + }) + + It("should create the ControlPlaneMachineSet with the provider spec matching the youngest machine provider spec", func() { + By("Checking the Control Plane Machine Set has been created") + Eventually(komega.Get(cpms)).Should(Succeed()) + // In this case expect the machine Provider Spec of the youngest machine to be used here. + // In this case it should be `machine-2` given that's the one we created last. + cpmsProviderSpec, err := providerconfig.NewProviderConfigFromMachineSpec(mgr.GetLogger(), cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.Spec) + Expect(err).To(BeNil()) + + machineProviderSpec, err := providerconfig.NewProviderConfigFromMachineSpec(mgr.GetLogger(), machine2.Spec) + Expect(err).To(BeNil()) + + // Remove from the machine Provider Spec the fields that won't be + // present on the ControlPlaneMachineSet Provider Spec. 
+					openStackMachineProviderConfig := machineProviderSpec.OpenStack().Config()
+					if openStackMachineProviderConfig.AvailabilityZone != "" {
+						openStackMachineProviderConfig.AvailabilityZone = ""
+					}
+					if openStackMachineProviderConfig.RootVolume != nil && openStackMachineProviderConfig.RootVolume.Zone != "" {
+						openStackMachineProviderConfig.RootVolume.Zone = ""
+					}
+
+					Expect(cpmsProviderSpec.OpenStack().Config()).To(Equal(openStackMachineProviderConfig))
+				})
+
+				It("should create the ControlPlaneMachineSet with only one copy of each of the 3 failure domains", func() {
+					By("Checking the Control Plane Machine Set has been created")
+					Eventually(komega.Get(cpms)).Should(Succeed())
+
+					Expect(cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains).To(Equal(cpms3FailureDomainsBuilderOpenStack.BuildFailureDomains()))
+				})
+
+				Context("With additional Machines adding additional failure domains", func() {
+					BeforeEach(func() {
+						By("Creating additional Machines")
+						createAZ4Machine()
+						createAZ5Machine()
+					})
+
+					It("should create the ControlPlaneMachineSet with only one copy of each of the 5 failure domains", func() {
+						By("Checking the Control Plane Machine Set has been created")
+						Eventually(komega.Get(cpms)).Should(Succeed())
+
+						Expect(cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains).To(Equal(cpms5FailureDomainsBuilderOpenStack.BuildFailureDomains()))
+					})
+				})
+			})
+		})
+
+		Context("with only 1 existing control plane machine", func() {
+			var logger testutils.TestLogger
+			isSupportedControlPlaneMachinesNumber := false
+
+			BeforeEach(func() {
+				By("Creating 1 Control Plane Machine")
+				machineBuilder := machinev1beta1resourcebuilder.Machine().AsMaster().WithNamespace(namespaceName)
+				machine2 = machineBuilder.WithProviderSpecBuilder(az3ProviderSpecBuilderOpenStack.WithFlavor("m1.large")).WithName("master-2").Build()
+				Expect(k8sClient.Create(ctx, machine2)).To(Succeed())
+				machines := []machinev1beta1.Machine{*machine2}
+
+				By("Invoking the check on whether the number of control plane machines in the cluster is supported")
+				logger = testutils.NewTestLogger()
+				isSupportedControlPlaneMachinesNumber = reconciler.isSupportedControlPlaneMachinesNumber(logger.Logger(), machines)
+			})
+
+			It("should not have created the ControlPlaneMachineSet", func() {
+				Consistently(komega.Get(cpms)).Should(MatchError("controlplanemachinesets.machine.openshift.io \"" + clusterControlPlaneMachineSetName + "\" not found"))
+			})
+
+			It("should detect the cluster has an unsupported number of control plane machines", func() {
+				Expect(isSupportedControlPlaneMachinesNumber).To(BeFalse())
+			})
+
+			It("sets an appropriate log line", func() {
+				Eventually(logger.Entries()).Should(ConsistOf(
+					testutils.LogEntry{
+						Level:         1,
+						KeysAndValues: []interface{}{"count", 1},
+						Message:       unsupportedNumberOfControlPlaneMachines,
+					},
+				))
+			})
+
+		})
+
+		Context("with an unsupported platform", func() {
+			var logger testutils.TestLogger
+			BeforeEach(func() {
+				By("Creating MachineSets")
+				create5MachineSets()
+
+				By("Creating Control Plane Machines")
+				machines := create3CPMachines()
+
+				logger = testutils.NewTestLogger()
+				generatedCPMS, err := reconciler.generateControlPlaneMachineSet(logger.Logger(), configv1.NonePlatformType, *machines, nil)
+				Expect(generatedCPMS).To(BeNil())
+				Expect(err).To(MatchError(errUnsupportedPlatform))
+			})
+
+			It("should not have created the ControlPlaneMachineSet", func() {
+				Consistently(komega.Get(cpms)).Should(MatchError("controlplanemachinesets.machine.openshift.io \"" + clusterControlPlaneMachineSetName + "\" not found"))
+			})
+
+			It("sets an appropriate log line", func() {
+				Eventually(logger.Entries()).Should(ConsistOf(
+					testutils.LogEntry{
+						Level:         1,
+						KeysAndValues: []interface{}{"platform", configv1.NonePlatformType},
+						Message:       unsupportedPlatform,
+					},
+				))
+			})
+
+		})
+	})
+
+	Context("when a Control Plane Machine Set exists with 5 Machine Sets", func() {
+		BeforeEach(func() {
+			By("Creating MachineSets")
+			create5MachineSets()
+			By("Creating Control Plane Machines")
+			create3CPMachines()
+		})
+
+		Context("with state Inactive and outdated", func() {
+			BeforeEach(func() {
+				By("Creating an outdated and Inactive Control Plane Machine Set")
+				// Create an Inactive ControlPlaneMachineSet with a Provider Spec that
+				// doesn't match the one of the youngest control plane machine (i.e. it's outdated).
+				cpms = cpmsInactive3FDsBuilderOpenStack.WithNamespace(namespaceName).Build()
+				Expect(k8sClient.Create(ctx, cpms)).To(Succeed())
+			})
+
+			It("should recreate ControlPlaneMachineSet with the provider spec matching the youngest machine provider spec", func() {
+				// In this case expect the machine Provider Spec of the youngest machine to be used here.
+				// In this case it should be `machine-2` given that's the one we created last.
+				machineProviderSpec, err := providerconfig.NewProviderConfigFromMachineSpec(mgr.GetLogger(), machine2.Spec)
+				Expect(err).To(BeNil())
+
+				// Remove from the machine Provider Spec the fields that won't be
+				// present on the ControlPlaneMachineSet Provider Spec.
+				openStackMachineProviderConfig := machineProviderSpec.OpenStack().Config()
+				if openStackMachineProviderConfig.AvailabilityZone != "" {
+					openStackMachineProviderConfig.AvailabilityZone = ""
+				}
+				if openStackMachineProviderConfig.RootVolume != nil && openStackMachineProviderConfig.RootVolume.Zone != "" {
+					openStackMachineProviderConfig.RootVolume.Zone = ""
+				}
+
+				oldUID := cpms.UID
+
+				Eventually(komega.Object(cpms), time.Second*30).Should(
+					HaveField("Spec.Template.OpenShiftMachineV1Beta1Machine.Spec",
+						WithTransform(func(in machinev1beta1.MachineSpec) machinev1alpha1.OpenstackProviderSpec {
+							mPS, err := providerconfig.NewProviderConfigFromMachineSpec(mgr.GetLogger(), in)
+							if err != nil {
+								return machinev1alpha1.OpenstackProviderSpec{}
+							}
+
+							return mPS.OpenStack().Config()
+						}, Equal(openStackMachineProviderConfig))),
+					"The control plane machine provider spec should match the youngest machine's provider spec",
+				)
+
+				Expect(oldUID).NotTo(Equal(cpms.UID),
+					"The control plane machine set UID should differ from the old one, as it should've been deleted and recreated")
+			})
+
+			Context("With additional MachineSets duplicating failure domains", func() {
+				BeforeEach(func() {
+					By("Creating additional MachineSets")
+					create3MachineSets()
+				})
+
+				It("should update, but not duplicate the failure domains on the ControlPlaneMachineSet", func() {
+					Eventually(komega.Object(cpms)).Should(HaveField("Spec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains", Equal(cpms5FailureDomainsBuilderOpenStack.BuildFailureDomains())))
+				})
+			})
+		})
+
+		Context("with state Inactive and up to date", func() {
+			BeforeEach(func() {
+				By("Creating an up to date and Inactive Control Plane Machine Set")
+				// Create an Inactive ControlPlaneMachineSet with a Provider Spec that
+				// matches the youngest control plane machine (i.e. it's up to date).
+				cpms = cpmsInactive5FDsBuilderOpenStack.WithNamespace(namespaceName).Build()
+				Expect(k8sClient.Create(ctx, cpms)).To(Succeed())
+			})
+
+			It("should keep the ControlPlaneMachineSet up to date and not change it", func() {
+				cpmsVersion := cpms.ObjectMeta.ResourceVersion
+				Consistently(komega.Object(cpms)).Should(HaveField("ObjectMeta.ResourceVersion", cpmsVersion))
+			})
+
+		})
+
+		Context("with state Active and outdated", func() {
+			BeforeEach(func() {
+				By("Creating an outdated and Active Control Plane Machine Set")
+				// Create an Active ControlPlaneMachineSet with a Provider Spec that
+				// doesn't match the one of the youngest control plane machine (i.e. it's outdated).
+				cpms = cpmsActiveOutdatedBuilderOpenStack.WithNamespace(namespaceName).Build()
+				Expect(k8sClient.Create(ctx, cpms)).To(Succeed())
+			})
+
+			It("should keep the CPMS unchanged", func() {
+				cpmsVersion := cpms.ObjectMeta.ResourceVersion
+				Consistently(komega.Object(cpms)).Should(HaveField("ObjectMeta.ResourceVersion", cpmsVersion))
+			})
+		})
+
+		Context("with state Active and up to date", func() {
+			BeforeEach(func() {
+				By("Creating an up to date and Active Control Plane Machine Set")
+				// Create an Active ControlPlaneMachineSet with a Provider Spec that
+				// matches the youngest control plane machine (i.e. it's up to date).
+ cpms = cpmsActiveUpToDateBuilderOpenStack.WithNamespace(namespaceName).Build() + Expect(k8sClient.Create(ctx, cpms)).To(Succeed()) + }) + + It("should keep the ControlPlaneMachineSet unchanged", func() { + cpmsVersion := cpms.ObjectMeta.ResourceVersion + Consistently(komega.Object(cpms)).Should(HaveField("ObjectMeta.ResourceVersion", cpmsVersion)) + }) + + }) + }) + + Context("when a Control Plane Machine Set exists with 3 Machine Sets", func() { + BeforeEach(func() { + By("Creating MachineSets") + create3MachineSets() + By("Creating Control Plane Machines") + create3CPMachines() + }) + + Context("with state Inactive and outdated", func() { + BeforeEach(func() { + By("Creating an outdated and Inactive Control Plane Machine Set") + // Create an Inactive ControlPlaneMachineSet with a Provider Spec that + // doesn't match the failure domains configured. + cpms = cpmsInactive5FDsBuilderOpenStack.WithNamespace(namespaceName).Build() + Expect(k8sClient.Create(ctx, cpms)).To(Succeed()) + }) + + It("should update ControlPlaneMachineSet with the expected failure domains", func() { + Eventually(komega.Object(cpms)).Should(HaveField("Spec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains", Equal(cpms3FailureDomainsBuilderOpenStack.BuildFailureDomains()))) + }) + + Context("With additional Machines adding additional failure domains", func() { + BeforeEach(func() { + By("Creating additional MachineSets") + createAZ4Machine() + createAZ5Machine() + }) + + It("should include additional failure domains from Machines, not present in the Machine Sets", func() { + Eventually(komega.Object(cpms)).Should(HaveField("Spec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains", Equal(cpms5FailureDomainsBuilderOpenStack.BuildFailureDomains()))) + }) + }) + }) + }) +}) diff --git a/pkg/controllers/controlplanemachinesetgenerator/gcp.go b/pkg/controllers/controlplanemachinesetgenerator/gcp.go index 990c0fb30..61807e166 100644 --- 
a/pkg/controllers/controlplanemachinesetgenerator/gcp.go +++ b/pkg/controllers/controlplanemachinesetgenerator/gcp.go @@ -79,7 +79,7 @@ func buildControlPlaneMachineSetGCPMachineSpec(logger logr.Logger, machines []ma } // buildGCPFailureDomains builds a GCP flavored FailureDomains for the ControlPlaneMachineSet. -func buildGCPFailureDomains(failureDomains *failuredomain.Set) machinev1.FailureDomains { +func buildGCPFailureDomains(failureDomains *failuredomain.Set) (machinev1.FailureDomains, error) { //nolint:unparam gcpFailureDomains := []machinev1.GCPFailureDomain{} for _, fd := range failureDomains.List() { @@ -91,5 +91,5 @@ func buildGCPFailureDomains(failureDomains *failuredomain.Set) machinev1.Failure Platform: configv1.GCPPlatformType, } - return cpmsFailureDomains + return cpmsFailureDomains, nil } diff --git a/pkg/controllers/controlplanemachinesetgenerator/openstack.go b/pkg/controllers/controlplanemachinesetgenerator/openstack.go new file mode 100644 index 000000000..429cedb8a --- /dev/null +++ b/pkg/controllers/controlplanemachinesetgenerator/openstack.go @@ -0,0 +1,146 @@ +/* +Copyright 2022 Red Hat, Inc. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package controlplanemachinesetgenerator + +import ( + "encoding/json" + "fmt" + + "github.com/go-logr/logr" + configv1 "github.com/openshift/api/config/v1" + machinev1 "github.com/openshift/api/machine/v1" + machinev1beta1 "github.com/openshift/api/machine/v1beta1" + machinev1builder "github.com/openshift/client-go/machine/applyconfigurations/machine/v1" + machinev1beta1builder "github.com/openshift/client-go/machine/applyconfigurations/machine/v1beta1" + "github.com/openshift/cluster-control-plane-machine-set-operator/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain" + "github.com/openshift/cluster-control-plane-machine-set-operator/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig" + "k8s.io/apimachinery/pkg/runtime" +) + +// checkOpenStackMachinesServerGroups checks if all machines have the same ServerGroup (the reference is the newest machine's ServerGroup). +func checkOpenStackMachinesServerGroups(logger logr.Logger, machines []machinev1beta1.Machine) error { + newestMachineProviderConfig, err := providerconfig.NewProviderConfigFromMachineSpec(logger, machines[0].Spec) + if err != nil { + return fmt.Errorf("failed to extract newest machine's OpenStack providerSpec: %w", err) + } + + newestOpenStackProviderSpec := newestMachineProviderConfig.OpenStack().Config() + newestServerGroup := newestOpenStackProviderSpec.ServerGroupName + + for _, machine := range machines { + // get the providerSpec from the machine + providerConfig, err := providerconfig.NewProviderConfigFromMachineSpec(logger, machine.Spec) + if err != nil { + return fmt.Errorf("failed to extract machine's OpenStack providerSpec: %w", err) + } + + openStackProviderSpec := providerConfig.OpenStack().Config() + // Return an error if the ServerGroup is not the same as the newest machine's ServerGroup. + if openStackProviderSpec.ServerGroupName != newestServerGroup { + return fmt.Errorf("%w: machine %s has a different ServerGroup than the newest machine. 
Check this KCS article for more information: https://access.redhat.com/solutions/7013893", errInconsistentProviderSpec, machine.Name) + } + } + + return nil +} + +// generateControlPlaneMachineSetOpenStackSpec generates an OpenStack flavored ControlPlaneMachineSet Spec. +func generateControlPlaneMachineSetOpenStackSpec(logger logr.Logger, machines []machinev1beta1.Machine, machineSets []machinev1beta1.MachineSet) (machinev1builder.ControlPlaneMachineSetSpecApplyConfiguration, error) { + // We want to make sure that the machines are ready to be used for generating a ControlPlaneMachineSet. + if err := checkOpenStackMachinesServerGroups(logger, machines); err != nil { + return machinev1builder.ControlPlaneMachineSetSpecApplyConfiguration{}, fmt.Errorf("failed to check OpenStack machines ServerGroup: %w", err) + } + + controlPlaneMachineSetMachineFailureDomainsApplyConfig, err := buildFailureDomains(logger, machineSets, machines) + if err != nil { + return machinev1builder.ControlPlaneMachineSetSpecApplyConfiguration{}, fmt.Errorf("failed to build ControlPlaneMachineSet's OpenStack failure domains: %w", err) + } + + controlPlaneMachineSetMachineSpecApplyConfig, err := buildControlPlaneMachineSetOpenStackMachineSpec(logger, machines) + if err != nil { + return machinev1builder.ControlPlaneMachineSetSpecApplyConfiguration{}, fmt.Errorf("failed to build ControlPlaneMachineSet's OpenStack spec: %w", err) + } + + // We want to work with the newest machine. 
+ controlPlaneMachineSetApplyConfigSpec := genericControlPlaneMachineSetSpec(replicas, machines[0].ObjectMeta.Labels[clusterIDLabelKey]) + controlPlaneMachineSetApplyConfigSpec.Template.OpenShiftMachineV1Beta1Machine.FailureDomains = controlPlaneMachineSetMachineFailureDomainsApplyConfig + controlPlaneMachineSetApplyConfigSpec.Template.OpenShiftMachineV1Beta1Machine.Spec = controlPlaneMachineSetMachineSpecApplyConfig + + return controlPlaneMachineSetApplyConfigSpec, nil +} + +// buildControlPlaneMachineSetOpenStackMachineSpec builds an OpenStack flavored MachineSpec for the ControlPlaneMachineSet. +func buildControlPlaneMachineSetOpenStackMachineSpec(logger logr.Logger, machines []machinev1beta1.Machine) (*machinev1beta1builder.MachineSpecApplyConfiguration, error) { + // The machines slice is sorted by the creation time. + // We want to get the provider config for the newest machine. + providerConfig, err := providerconfig.NewProviderConfigFromMachineSpec(logger, machines[0].Spec) + if err != nil { + return nil, fmt.Errorf("failed to extract machine's OpenStack providerSpec: %w", err) + } + + openStackProviderSpec := providerConfig.OpenStack().Config() + + // Remove field related to the failure domain. + openStackProviderSpec.AvailabilityZone = "" + + if openStackProviderSpec.RootVolume != nil { + openStackProviderSpec.RootVolume.Zone = "" + } + + rawBytes, err := json.Marshal(openStackProviderSpec) + if err != nil { + return nil, fmt.Errorf("error marshalling OpenStack providerSpec: %w", err) + } + + re := runtime.RawExtension{ + Raw: rawBytes, + } + + return &machinev1beta1builder.MachineSpecApplyConfiguration{ + ProviderSpec: &machinev1beta1builder.ProviderSpecApplyConfiguration{Value: &re}, + }, nil +} + +// buildOpenStackFailureDomains builds an OpenStack flavored FailureDomains for the ControlPlaneMachineSet. 
+func buildOpenStackFailureDomains(failureDomains *failuredomain.Set) (machinev1.FailureDomains, error) { + openStackFailureDomains := []machinev1.OpenStackFailureDomain{} + for _, fd := range failureDomains.List() { + openStackFailureDomains = append(openStackFailureDomains, fd.OpenStack()) + } + + emptyOpenStackFailureDomain := machinev1.OpenStackFailureDomain{} + emptyFailureDomain := machinev1.FailureDomains{} + + // We want to make sure that if a failure domain is empty, it is the only one. + if len(openStackFailureDomains) > 1 { + for _, fd := range openStackFailureDomains { + if fd == emptyOpenStackFailureDomain { + return emptyFailureDomain, fmt.Errorf("error building OpenStack failure domains: %w", errMixedEmptyFailureDomains) + } + } + } + + // We want to make sure that if a failure domain is empty, we don't create it. + if len(openStackFailureDomains) == 1 && openStackFailureDomains[0] == emptyOpenStackFailureDomain { + openStackFailureDomains = nil + } + + return machinev1.FailureDomains{ + OpenStack: openStackFailureDomains, + Platform: configv1.OpenStackPlatformType, + }, nil +} diff --git a/pkg/controllers/controlplanemachinesetgenerator/utils.go b/pkg/controllers/controlplanemachinesetgenerator/utils.go index cfb98113a..f5e9b9a25 100644 --- a/pkg/controllers/controlplanemachinesetgenerator/utils.go +++ b/pkg/controllers/controlplanemachinesetgenerator/utils.go @@ -165,6 +165,8 @@ func convertViaJSON(in, out interface{}) error { } // buildFailureDomains builds a flavored FailureDomain for the ControlPlaneMachineSet according to what platform we are on. 
+// +//nolint:cyclop func buildFailureDomains(logger logr.Logger, machineSets []machinev1beta1.MachineSet, machines []machinev1beta1.Machine) (*machinev1builder.FailureDomainsApplyConfiguration, error) { // Fetch failure domains from the machines machineFailureDomains, err := providerconfig.ExtractFailureDomainsFromMachines(logger, machines) @@ -188,11 +190,20 @@ func buildFailureDomains(logger logr.Logger, machineSets []machinev1beta1.Machin switch machineFailureDomains[0].Type() { case configv1.AWSPlatformType: - cpmsFailureDomain = buildAWSFailureDomains(failureDomains) + cpmsFailureDomain, _ = buildAWSFailureDomains(failureDomains) case configv1.AzurePlatformType: - cpmsFailureDomain = buildAzureFailureDomains(failureDomains) + cpmsFailureDomain, _ = buildAzureFailureDomains(failureDomains) case configv1.GCPPlatformType: - cpmsFailureDomain = buildGCPFailureDomains(failureDomains) + cpmsFailureDomain, _ = buildGCPFailureDomains(failureDomains) + case configv1.OpenStackPlatformType: + cpmsFailureDomain, err = buildOpenStackFailureDomains(failureDomains) + if err != nil { + return nil, fmt.Errorf("failed to build OpenStack failure domains: %w", err) + } + + if cpmsFailureDomain.OpenStack == nil || cpmsFailureDomain.Platform == "" { + return nil, nil //nolint:nilnil + } default: return nil, fmt.Errorf("%w: %sFailureDomain{}", errUnsupportedPlatform, machineFailureDomains[0].Type()) } diff --git a/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain/failuredomain.go b/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain/failuredomain.go index 3dd3c305b..10df2e054 100644 --- a/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain/failuredomain.go +++ b/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain/failuredomain.go @@ -59,6 +59,9 @@ type FailureDomain interface { // GCP returns the GCPFailureDomain if the platform type is GCP. 
GCP() machinev1.GCPFailureDomain + // OpenStack returns the OpenStackFailureDomain if the platform type is OpenStack. + OpenStack() machinev1.OpenStackFailureDomain + // Equal compares the underlying failure domain. Equal(other FailureDomain) bool } @@ -67,9 +70,10 @@ type FailureDomain interface { type failureDomain struct { platformType configv1.PlatformType - aws machinev1.AWSFailureDomain - azure machinev1.AzureFailureDomain - gcp machinev1.GCPFailureDomain + aws machinev1.AWSFailureDomain + azure machinev1.AzureFailureDomain + gcp machinev1.GCPFailureDomain + openstack machinev1.OpenStackFailureDomain } // String returns a string representation of the failure domain. @@ -81,6 +85,8 @@ func (f failureDomain) String() string { return azureFailureDomainToString(f.azure) case configv1.GCPPlatformType: return gcpFailureDomainToString(f.gcp) + case configv1.OpenStackPlatformType: + return openstackFailureDomainToString(f.openstack) default: return fmt.Sprintf("%sFailureDomain{}", f.platformType) } @@ -106,6 +112,11 @@ func (f failureDomain) GCP() machinev1.GCPFailureDomain { return f.gcp } +// OpenStack returns the OpenStackFailureDomain if the platform type is OpenStack. +func (f failureDomain) OpenStack() machinev1.OpenStackFailureDomain { + return f.openstack +} + // Equal compares the underlying failure domain. 
func (f failureDomain) Equal(other FailureDomain) bool { if other == nil { @@ -123,6 +134,8 @@ func (f failureDomain) Equal(other FailureDomain) bool { return f.azure == other.Azure() case configv1.GCPPlatformType: return f.gcp == other.GCP() + case configv1.OpenStackPlatformType: + return reflect.DeepEqual(f.openstack, other.OpenStack()) } return true @@ -138,6 +151,8 @@ func NewFailureDomains(failureDomains machinev1.FailureDomains) ([]FailureDomain return newAzureFailureDomains(failureDomains) case configv1.GCPPlatformType: return newGCPFailureDomains(failureDomains) + case configv1.OpenStackPlatformType: + return newOpenStackFailureDomains(failureDomains) case configv1.PlatformType(""): // An empty failure domains definition is allowed. return nil, nil @@ -188,6 +203,21 @@ func newGCPFailureDomains(failureDomains machinev1.FailureDomains) ([]FailureDom return foundFailureDomains, nil } +// newOpenStackFailureDomains constructs a slice of OpenStack FailureDomain from machinev1.FailureDomains. +func newOpenStackFailureDomains(failureDomains machinev1.FailureDomains) ([]FailureDomain, error) { + foundFailureDomains := []FailureDomain{} + + if len(failureDomains.OpenStack) == 0 { + return foundFailureDomains, errMissingFailureDomain + } + + for _, failureDomain := range failureDomains.OpenStack { + foundFailureDomains = append(foundFailureDomains, NewOpenStackFailureDomain(failureDomain)) + } + + return foundFailureDomains, nil +} + // NewAWSFailureDomain creates an AWS failure domain from the machinev1.AWSFailureDomain. // Note this is exported to allow other packages to construct individual failure domains // in tests. @@ -214,6 +244,14 @@ func NewGCPFailureDomain(fd machinev1.GCPFailureDomain) FailureDomain { } } +// NewOpenStackFailureDomain creates an OpenStack failure domain from the machinev1.OpenStackFailureDomain. 
+func NewOpenStackFailureDomain(fd machinev1.OpenStackFailureDomain) FailureDomain { + return &failureDomain{ + platformType: configv1.OpenStackPlatformType, + openstack: fd, + } +} + // NewGenericFailureDomain creates a dummy failure domain for generic platforms that don't support failure domains. func NewGenericFailureDomain() FailureDomain { return failureDomain{} @@ -276,3 +314,27 @@ func gcpFailureDomainToString(fd machinev1.GCPFailureDomain) string { return unknownFailureDomain } + +// openstackFailureDomainToString converts the OpenStackFailureDomain into a string. +func openstackFailureDomainToString(fd machinev1.OpenStackFailureDomain) string { + displayRootVolume := func(rootVolume machinev1.RootVolume) string { + return fmt.Sprintf("{AvailabilityZone:%s}", rootVolume.AvailabilityZone) + } + + // AvailabilityZone only + if fd.AvailabilityZone != "" && fd.RootVolume == nil { + return fmt.Sprintf("OpenStackFailureDomain{AvailabilityZone:%s}", fd.AvailabilityZone) + } + + // RootVolume only + if fd.AvailabilityZone == "" && fd.RootVolume != nil { + return fmt.Sprintf("OpenStackFailureDomain{RootVolume:%s}", displayRootVolume(*fd.RootVolume)) + } + + // AvailabilityZone and RootVolume + if fd.AvailabilityZone != "" && fd.RootVolume != nil { + return fmt.Sprintf("OpenStackFailureDomain{AvailabilityZone:%s, RootVolume:%s}", fd.AvailabilityZone, displayRootVolume(*fd.RootVolume)) + } + + return unknownFailureDomain +} diff --git a/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain/failuredomain_test.go b/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain/failuredomain_test.go index 2d0986d16..3c31ca9f9 100644 --- a/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain/failuredomain_test.go +++ b/pkg/machineproviders/providers/openshift/machine/v1beta1/failuredomain/failuredomain_test.go @@ -173,6 +173,49 @@ var _ = Describe("FailureDomains", func() { }) }) + Context("With OpenStack failure domain 
configuration", func() { + var failureDomains []FailureDomain + var err error + + BeforeEach(func() { + config := machinev1resourcebuilder.OpenStackFailureDomains().BuildFailureDomains() + + failureDomains, err = NewFailureDomains(config) + }) + + It("should not error", func() { + Expect(err).ToNot(HaveOccurred()) + }) + + It("should construct a list of failure domains", func() { + Expect(failureDomains).To(ConsistOf( + HaveField("String()", "OpenStackFailureDomain{AvailabilityZone:nova-az0, RootVolume:{AvailabilityZone:cinder-az0}}"), + HaveField("String()", "OpenStackFailureDomain{AvailabilityZone:nova-az1, RootVolume:{AvailabilityZone:cinder-az1}}"), + HaveField("String()", "OpenStackFailureDomain{AvailabilityZone:nova-az2, RootVolume:{AvailabilityZone:cinder-az2}}"), + )) + }) + }) + + Context("With invalid OpenStack failure domain configuration", func() { + var failureDomains []FailureDomain + var err error + + BeforeEach(func() { + config := machinev1resourcebuilder.OpenStackFailureDomains().BuildFailureDomains() + config.OpenStack = nil + + failureDomains, err = NewFailureDomains(config) + }) + + It("returns an error", func() { + Expect(err).To(MatchError("missing failure domain configuration")) + }) + + It("returns an empty list of failure domains", func() { + Expect(failureDomains).To(BeEmpty()) + }) + }) + Context("With an unsupported platform type", func() { var failureDomains []FailureDomain var err error @@ -296,9 +339,64 @@ var _ = Describe("FailureDomains", func() { }) }) + Context("an OpenStack failure domain", func() { + var fd failureDomain + var filterRootVolume = machinev1.RootVolume{ + AvailabilityZone: "cinder-az0", + } + + BeforeEach(func() { + fd = failureDomain{ + platformType: configv1.OpenStackPlatformType, + } + }) + + Context("with a Compute and Storage availability zone", func() { + BeforeEach(func() { + fd.openstack = machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az0"). 
+						WithRootVolume(filterRootVolume).Build()
+				})
+
+				It("returns the Compute and Storage availability zones for String()", func() {
+					Expect(fd.String()).To(Equal("OpenStackFailureDomain{AvailabilityZone:nova-az0, RootVolume:{AvailabilityZone:cinder-az0}}"))
+				})
+			})
+
+			Context("with a Compute availability zone only", func() {
+				BeforeEach(func() {
+					fd.openstack = machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az0").Build()
+				})
+
+				It("returns the Compute availability zone for String()", func() {
+					Expect(fd.String()).To(Equal("OpenStackFailureDomain{AvailabilityZone:nova-az0}"))
+				})
+			})
+
+			Context("with a Storage availability zone only", func() {
+				BeforeEach(func() {
+					fd.openstack = machinev1resourcebuilder.OpenStackFailureDomain().WithRootVolume(filterRootVolume).Build()
+				})
+
+				It("returns the Storage availability zone for String()", func() {
+					Expect(fd.String()).To(Equal("OpenStackFailureDomain{RootVolume:{AvailabilityZone:cinder-az0}}"))
+				})
+			})
+
+			Context("with no availability zones", func() {
+				BeforeEach(func() {
+					fd.openstack = machinev1resourcebuilder.OpenStackFailureDomain().Build()
+				})
+
+				It("returns the unknown failure domain for String()", func() {
+					Expect(fd.String()).To(Equal(unknownFailureDomain))
+				})
+			})
+		})
+
 		Context("Equal", func() {
 			var fd1 failureDomain
 			var fd2 failureDomain
+			var filterRootVolume = machinev1.RootVolume{
+				AvailabilityZone: "cinder-az0",
+			}
 
 		Context("With two identical AWS failure domains", func() {
 			BeforeEach(func() {
@@ -381,6 +479,23 @@ var _ = Describe("FailureDomains", func() {
 			})
 		})
 
+		Context("With two identical OpenStack failure domains", func() {
+			BeforeEach(func() {
+				fd1 = failureDomain{
+					platformType: configv1.OpenStackPlatformType,
+					openstack:    machinev1resourcebuilder.OpenStackFailureDomain().WithRootVolume(filterRootVolume).Build(),
+				}
+				fd2 = failureDomain{
+					platformType: configv1.OpenStackPlatformType,
+					openstack:    machinev1resourcebuilder.OpenStackFailureDomain().WithRootVolume(filterRootVolume).Build(),
+				}
+			})
+
+			It("returns true", func() {
+				Expect(fd1.Equal(fd2)).To(BeTrue())
+			})
+		})
+
 		Context("With two different Azure failure domains", func() {
 			BeforeEach(func() {
 				fd1 = failureDomain{
@@ -398,6 +513,23 @@ var _ = Describe("FailureDomains", func() {
 			})
 		})
 
+		Context("With two different OpenStack failure domains", func() {
+			BeforeEach(func() {
+				fd1 = failureDomain{
+					platformType: configv1.OpenStackPlatformType,
+					openstack:    machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az0").Build(),
+				}
+				fd2 = failureDomain{
+					platformType: configv1.OpenStackPlatformType,
+					openstack:    machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az1").Build(),
+				}
+			})
+
+			It("returns false", func() {
+				Expect(fd1.Equal(fd2)).To(BeFalse())
+			})
+		})
+
 		Context("With different failure domains platform", func() {
 			BeforeEach(func() {
 				fd1 = failureDomain{
diff --git a/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/openstack.go b/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/openstack.go
new file mode 100644
index 000000000..c3b645ffc
--- /dev/null
+++ b/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/openstack.go
@@ -0,0 +1,96 @@
+/*
+Copyright 2022 Red Hat, Inc.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package providerconfig
+
+import (
+	"fmt"
+
+	"github.com/go-logr/logr"
+	v1 "github.com/openshift/api/config/v1"
+	machinev1 "github.com/openshift/api/machine/v1"
+	machinev1alpha1 "github.com/openshift/api/machine/v1alpha1"
+	"k8s.io/apimachinery/pkg/runtime"
+)
+
+// OpenStackProviderConfig holds the provider spec of an OpenStack Machine.
+// It allows external code to extract and inject failure domain information,
+// as well as gathering the stored config.
+type OpenStackProviderConfig struct {
+	providerConfig machinev1alpha1.OpenstackProviderSpec
+}
+
+// InjectFailureDomain returns a new OpenStackProviderConfig configured with the failure domain
+// information provided.
+func (a OpenStackProviderConfig) InjectFailureDomain(fd machinev1.OpenStackFailureDomain) OpenStackProviderConfig {
+	newOpenStackProviderConfig := a
+
+	if fd.AvailabilityZone != "" {
+		newOpenStackProviderConfig.providerConfig.AvailabilityZone = fd.AvailabilityZone
+	}
+
+	if fd.RootVolume != nil && newOpenStackProviderConfig.providerConfig.RootVolume != nil && fd.RootVolume.AvailabilityZone != "" {
+		newOpenStackProviderConfig.providerConfig.RootVolume.Zone = fd.RootVolume.AvailabilityZone
+	}
+
+	return newOpenStackProviderConfig
+}
+
+// ExtractFailureDomain returns an OpenStackFailureDomain based on the failure domain
+// information stored within the OpenStackProviderConfig.
+func (a OpenStackProviderConfig) ExtractFailureDomain() machinev1.OpenStackFailureDomain {
+	rootVolume := func(pc machinev1alpha1.OpenstackProviderSpec) *machinev1.RootVolume {
+		if pc.RootVolume != nil && pc.RootVolume.Zone != "" {
+			return &machinev1.RootVolume{
+				AvailabilityZone: pc.RootVolume.Zone,
+			}
+		}
+
+		return nil
+	}
+
+	return machinev1.OpenStackFailureDomain{
+		AvailabilityZone: a.providerConfig.AvailabilityZone,
+		RootVolume:       rootVolume(a.providerConfig),
+	}
+}
+
+// Config returns the stored OpenStackMachineProviderSpec.
+func (a OpenStackProviderConfig) Config() machinev1alpha1.OpenstackProviderSpec { + return a.providerConfig +} + +// newOpenStackProviderConfig creates an OpenStack type ProviderConfig from the raw extension. +// It should return an error if the provided RawExtension does not represent +// an OpenstackProviderSpec. +func newOpenStackProviderConfig(logger logr.Logger, raw *runtime.RawExtension) (ProviderConfig, error) { + openstackProviderSpec := machinev1alpha1.OpenstackProviderSpec{} + if err := checkForUnknownFieldsInProviderSpecAndUnmarshal(logger, raw, &openstackProviderSpec); err != nil { + return nil, fmt.Errorf("failed to check for unknown fields in the provider spec: %w", err) + } + + openstackProviderConfig := OpenStackProviderConfig{ + providerConfig: openstackProviderSpec, + } + + config := providerConfig{ + platformType: v1.OpenStackPlatformType, + openstack: openstackProviderConfig, + } + + return config, nil +} diff --git a/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/openstack_test.go b/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/openstack_test.go new file mode 100644 index 000000000..4a23e610e --- /dev/null +++ b/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/openstack_test.go @@ -0,0 +1,127 @@ +/* +Copyright 2023 Red Hat, Inc. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package providerconfig + +import ( + . "github.com/onsi/ginkgo/v2" + .
"github.com/onsi/gomega" + + configv1 "github.com/openshift/api/config/v1" + machinev1 "github.com/openshift/api/machine/v1" + machinev1alpha1 "github.com/openshift/api/machine/v1alpha1" + "github.com/openshift/cluster-api-actuator-pkg/testutils" + machinev1resourcebuilder "github.com/openshift/cluster-api-actuator-pkg/testutils/resourcebuilder/machine/v1" + machinev1beta1resourcebuilder "github.com/openshift/cluster-api-actuator-pkg/testutils/resourcebuilder/machine/v1beta1" +) + +var _ = Describe("OpenStack Provider Config", func() { + var logger testutils.TestLogger + + var providerConfig OpenStackProviderConfig + + novaZone1 := "nova-az1" + novaZone2 := "nova-az2" + cinderZone1 := "cinder-az1" + + machinev1alpha1RootVolume1 := &machinev1alpha1.RootVolume{ + Zone: cinderZone1, + } + machinev1RootVolume1 := machinev1.RootVolume{ + AvailabilityZone: cinderZone1, + } + + BeforeEach(func() { + machineProviderConfig := machinev1beta1resourcebuilder.OpenStackProviderSpec(). + WithZone(novaZone1). + WithRootVolume(machinev1alpha1RootVolume1). + Build() + + providerConfig = OpenStackProviderConfig{ + providerConfig: *machineProviderConfig, + } + + logger = testutils.NewTestLogger() + }) + + Context("ExtractFailureDomain", func() { + It("returns the configured failure domain", func() { + expected := machinev1resourcebuilder.OpenStackFailureDomain(). + WithComputeAvailabilityZone(novaZone1). + WithRootVolume(machinev1RootVolume1). + Build() + + Expect(providerConfig.ExtractFailureDomain()).To(Equal(expected)) + }) + }) + + Context("when the failuredomain is changed after initialisation", func() { + var changedProviderConfig OpenStackProviderConfig + + BeforeEach(func() { + changedFailureDomain := machinev1resourcebuilder.OpenStackFailureDomain(). + WithComputeAvailabilityZone(novaZone2). + WithRootVolume(machinev1RootVolume1). 
+ Build() + + changedProviderConfig = providerConfig.InjectFailureDomain(changedFailureDomain) + }) + + Context("ExtractFailureDomain", func() { + It("returns the changed failure domain from the changed config", func() { + expected := machinev1resourcebuilder.OpenStackFailureDomain(). + WithComputeAvailabilityZone(novaZone2). + WithRootVolume(machinev1RootVolume1). + Build() + + Expect(changedProviderConfig.ExtractFailureDomain()).To(Equal(expected)) + }) + + It("returns the original failure domain from the original config", func() { + expected := machinev1resourcebuilder.OpenStackFailureDomain(). + WithComputeAvailabilityZone(novaZone1). + WithRootVolume(machinev1RootVolume1). + Build() + + Expect(providerConfig.ExtractFailureDomain()).To(Equal(expected)) + }) + }) + }) + + Context("newOpenStackProviderConfig", func() { + var providerConfig ProviderConfig + var expectedOpenStackConfig machinev1alpha1.OpenstackProviderSpec + + BeforeEach(func() { + configBuilder := machinev1beta1resourcebuilder.OpenStackProviderSpec() + expectedOpenStackConfig = *configBuilder.Build() + rawConfig := configBuilder.BuildRawExtension() + + var err error + providerConfig, err = newOpenStackProviderConfig(logger.Logger(), rawConfig) + Expect(err).ToNot(HaveOccurred()) + }) + + It("sets the type to OpenStack", func() { + Expect(providerConfig.Type()).To(Equal(configv1.OpenStackPlatformType)) + }) + + It("returns the correct OpenStack config", func() { + Expect(providerConfig.OpenStack()).ToNot(BeNil()) + Expect(providerConfig.OpenStack().Config()).To(Equal(expectedOpenStackConfig)) + }) + }) +}) diff --git a/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/providerconfig.go b/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/providerconfig.go index 3542b0847..dd73114d0 100644 --- a/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/providerconfig.go +++ 
b/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/providerconfig.go @@ -85,6 +85,9 @@ type ProviderConfig interface { // Nutanix returns the NutanixProviderConfig if the platform type is Nutanix. Nutanix() NutanixProviderConfig + // OpenStack returns the OpenStackProviderConfig if the platform type is OpenStack. + OpenStack() OpenStackProviderConfig + // Generic returns the GenericProviderConfig if we are on a platform that is using generic provider abstraction. Generic() GenericProviderConfig } @@ -123,6 +126,8 @@ func newProviderConfigFromProviderSpec(logger logr.Logger, providerSpec machinev return newGCPProviderConfig(logger, providerSpec.Value) case configv1.NutanixPlatformType: return newNutanixProviderConfig(logger, providerSpec.Value) + case configv1.OpenStackPlatformType: + return newOpenStackProviderConfig(logger, providerSpec.Value) case configv1.NonePlatformType: return nil, fmt.Errorf("%w: %s", errUnsupportedPlatformType, platformType) default: @@ -138,6 +143,7 @@ type providerConfig struct { gcp GCPProviderConfig nutanix NutanixProviderConfig generic GenericProviderConfig + openstack OpenStackProviderConfig } // InjectFailureDomain is used to inject a failure domain into the ProviderConfig. 
@@ -157,6 +163,8 @@ func (p providerConfig) InjectFailureDomain(fd failuredomain.FailureDomain) (Pro newConfig.azure = p.Azure().InjectFailureDomain(fd.Azure()) case configv1.GCPPlatformType: newConfig.gcp = p.GCP().InjectFailureDomain(fd.GCP()) + case configv1.OpenStackPlatformType: + newConfig.openstack = p.OpenStack().InjectFailureDomain(fd.OpenStack()) case configv1.NonePlatformType: return nil, fmt.Errorf("%w: %s", errUnsupportedPlatformType, p.platformType) } @@ -173,6 +181,8 @@ func (p providerConfig) ExtractFailureDomain() failuredomain.FailureDomain { return failuredomain.NewAzureFailureDomain(p.Azure().ExtractFailureDomain()) case configv1.GCPPlatformType: return failuredomain.NewGCPFailureDomain(p.GCP().ExtractFailureDomain()) + case configv1.OpenStackPlatformType: + return failuredomain.NewOpenStackFailureDomain(p.OpenStack().ExtractFailureDomain()) case configv1.NonePlatformType: return nil default: @@ -182,6 +192,8 @@ func (p providerConfig) ExtractFailureDomain() failuredomain.FailureDomain { // Diff compares two ProviderConfigs and returns a list of differences, // or nil if there are none. +// +//nolint:dupl func (p providerConfig) Diff(other ProviderConfig) ([]string, error) { if other == nil { return nil, nil @@ -200,6 +212,8 @@ func (p providerConfig) Diff(other ProviderConfig) ([]string, error) { return deep.Equal(p.gcp.providerConfig, other.GCP().providerConfig), nil case configv1.NutanixPlatformType: return deep.Equal(p.nutanix.providerConfig, other.Nutanix().providerConfig), nil + case configv1.OpenStackPlatformType: + return deep.Equal(p.openstack.providerConfig, other.OpenStack().providerConfig), nil case configv1.NonePlatformType: return nil, errUnsupportedPlatformType default: @@ -208,6 +222,8 @@ func (p providerConfig) Diff(other ProviderConfig) ([]string, error) { } // Equal compares two ProviderConfigs to determine whether or not they are equal. 
+// +//nolint:dupl func (p providerConfig) Equal(other ProviderConfig) (bool, error) { if other == nil { return false, nil @@ -226,6 +242,8 @@ func (p providerConfig) Equal(other ProviderConfig) (bool, error) { return reflect.DeepEqual(p.gcp.providerConfig, other.GCP().providerConfig), nil case configv1.NutanixPlatformType: return reflect.DeepEqual(p.nutanix.providerConfig, other.Nutanix().providerConfig), nil + case configv1.OpenStackPlatformType: + return reflect.DeepEqual(p.openstack.providerConfig, other.OpenStack().providerConfig), nil case configv1.NonePlatformType: return false, errUnsupportedPlatformType default: @@ -249,6 +267,8 @@ func (p providerConfig) RawConfig() ([]byte, error) { rawConfig, err = json.Marshal(p.gcp.providerConfig) case configv1.NutanixPlatformType: rawConfig, err = json.Marshal(p.nutanix.providerConfig) + case configv1.OpenStackPlatformType: + rawConfig, err = json.Marshal(p.openstack.providerConfig) case configv1.NonePlatformType: return nil, errUnsupportedPlatformType default: @@ -287,6 +307,11 @@ func (p providerConfig) Nutanix() NutanixProviderConfig { return p.nutanix } +// OpenStack returns the OpenStackProviderConfig if the platform type is OpenStack. +func (p providerConfig) OpenStack() OpenStackProviderConfig { + return p.openstack +} + // Generic returns the GenericProviderConfig if the platform type is generic. 
func (p providerConfig) Generic() GenericProviderConfig { return p.generic @@ -300,6 +325,7 @@ func getPlatformTypeFromProviderSpecKind(kind string) configv1.PlatformType { "AzureMachineProviderSpec": configv1.AzurePlatformType, "GCPMachineProviderSpec": configv1.GCPPlatformType, "NutanixMachineProviderConfig": configv1.NutanixPlatformType, + "OpenstackProviderSpec": configv1.OpenStackPlatformType, } platformType, ok := providerSpecKindToPlatformType[kind] diff --git a/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/providerconfig_test.go b/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/providerconfig_test.go index 65737efde..b17cd0605 100644 --- a/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/providerconfig_test.go +++ b/pkg/machineproviders/providers/openshift/machine/v1beta1/providerconfig/providerconfig_test.go @@ -23,6 +23,7 @@ import ( configv1 "github.com/openshift/api/config/v1" machinev1 "github.com/openshift/api/machine/v1" + machinev1alpha1 "github.com/openshift/api/machine/v1alpha1" machinev1beta1 "github.com/openshift/api/machine/v1beta1" "github.com/openshift/cluster-api-actuator-pkg/testutils" "github.com/openshift/cluster-api-actuator-pkg/testutils/resourcebuilder" @@ -117,6 +118,18 @@ var _ = Describe("Provider Config", func() { providerSpecBuilder: machinev1beta1resourcebuilder.GCPProviderSpec(), providerConfigMatcher: HaveField("GCP().Config()", *machinev1beta1resourcebuilder.GCPProviderSpec().Build()), }), + Entry("with an OpenStack config with failure domains", providerConfigTableInput{ + expectedPlatformType: configv1.OpenStackPlatformType, + failureDomainsBuilder: machinev1resourcebuilder.OpenStackFailureDomains(), + providerSpecBuilder: machinev1beta1resourcebuilder.OpenStackProviderSpec(), + providerConfigMatcher: HaveField("OpenStack().Config()", *machinev1beta1resourcebuilder.OpenStackProviderSpec().Build()), + }), + Entry("with an OpenStack config without failure 
domains", providerConfigTableInput{ + expectedPlatformType: configv1.OpenStackPlatformType, + failureDomainsBuilder: nil, + providerSpecBuilder: machinev1beta1resourcebuilder.OpenStackProviderSpec(), + providerConfigMatcher: HaveField("OpenStack().Config()", *machinev1beta1resourcebuilder.OpenStackProviderSpec().Build()), + }), ) }) @@ -243,6 +256,66 @@ var _ = Describe("Provider Config", func() { matchPath: "GCP().Config().Zone", matchExpectation: "us-central1-b", }), + Entry("when keeping an OpenStack compute availability zone the same", injectFailureDomainTableInput{ + providerConfig: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az0").Build(), + }, + }, + failureDomain: failuredomain.NewOpenStackFailureDomain( + machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az0").Build(), + ), + matchPath: "OpenStack().Config().AvailabilityZone", + matchExpectation: "nova-az0", + }), + Entry("when keeping an OpenStack volume availability zone the same", injectFailureDomainTableInput{ + providerConfig: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().WithRootVolume(&machinev1alpha1.RootVolume{ + Zone: "cinder-az1", + }).Build(), + }, + }, + failureDomain: failuredomain.NewOpenStackFailureDomain( + machinev1resourcebuilder.OpenStackFailureDomain().WithRootVolume(machinev1.RootVolume{ + AvailabilityZone: "cinder-az1", + }).Build(), + ), + matchPath: "OpenStack().Config().RootVolume.Zone", + matchExpectation: "cinder-az1", + }), + Entry("when changing an OpenStack compute availability zone", injectFailureDomainTableInput{ + providerConfig: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: 
*machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az0").Build(), + }, + }, + failureDomain: failuredomain.NewOpenStackFailureDomain( + machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az1").Build(), + ), + matchPath: "OpenStack().Config().AvailabilityZone", + matchExpectation: "nova-az1", + }), + Entry("when changing an OpenStack volume availability zone", injectFailureDomainTableInput{ + providerConfig: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().WithRootVolume(&machinev1alpha1.RootVolume{ + Zone: "cinder-az0", + }).Build(), + }, + }, + failureDomain: failuredomain.NewOpenStackFailureDomain( + machinev1resourcebuilder.OpenStackFailureDomain().WithRootVolume(machinev1.RootVolume{ + AvailabilityZone: "cinder-az1", + }).Build(), + ), + matchPath: "OpenStack().Config().RootVolume.Zone", + matchExpectation: "cinder-az1", + }), ) }) @@ -300,6 +373,11 @@ var _ = Describe("Provider Config", func() { providerSpecBuilder: machinev1beta1resourcebuilder.GCPProviderSpec(), providerConfigMatcher: HaveField("GCP().Config()", *machinev1beta1resourcebuilder.GCPProviderSpec().Build()), }), + Entry("with an OpenStack config with failure domains", providerConfigTableInput{ + expectedPlatformType: configv1.OpenStackPlatformType, + providerSpecBuilder: machinev1beta1resourcebuilder.OpenStackProviderSpec(), + providerConfigMatcher: HaveField("OpenStack().Config()", *machinev1beta1resourcebuilder.OpenStackProviderSpec().Build()), + }), ) }) @@ -388,6 +466,10 @@ var _ = Describe("Provider Config", func() { }}, } + rootVolume := &machinev1alpha1.RootVolume{ + Zone: "cinder-az2", + } + DescribeTable("should correctly extract the failure domain", func(in extractFailureDomainTableInput) { fd := in.providerConfig.ExtractFailureDomain() @@ -437,6 +519,19 @@ var _ = Describe("Provider Config", func() { 
machinev1resourcebuilder.GCPFailureDomain().WithZone("us-central1-a").Build(), ), }), + Entry("with an OpenStack az2 failure domain", extractFailureDomainTableInput{ + providerConfig: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().WithRootVolume(rootVolume).WithZone("nova-az2").Build(), + }, + }, + expectedFailureDomain: failuredomain.NewOpenStackFailureDomain( + machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az2").WithRootVolume(machinev1.RootVolume{ + AvailabilityZone: "cinder-az2", + }).Build(), + ), + }), Entry("with a VSphere dummy failure domain", extractFailureDomainTableInput{ providerConfig: &providerConfig{ platformType: configv1.VSpherePlatformType, @@ -457,6 +552,10 @@ var _ = Describe("Provider Config", func() { expectedError error } + rootVolume := &machinev1alpha1.RootVolume{ + Zone: "cinder-az0", + } + DescribeTable("should compare provider configs", func(in equalTableInput) { equal, err := in.basePC.Equal(in.comparePC) @@ -575,6 +674,36 @@ var _ = Describe("Provider Config", func() { }, expectedEqual: false, }), + Entry("with matching OpenStack configs", equalTableInput{ + basePC: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az0").WithRootVolume(rootVolume).Build(), + }, + }, + comparePC: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az0").WithRootVolume(rootVolume).Build(), + }, + }, + expectedEqual: true, + }), + Entry("with mis-matched OpenStack configs", equalTableInput{ + basePC: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + 
providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az0").WithRootVolume(rootVolume).Build(), + }, + }, + comparePC: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().WithZone("nova-az1").WithRootVolume(rootVolume).Build(), + }, + }, + expectedEqual: false, + }), Entry("with matching Generic configs", equalTableInput{ basePC: &providerConfig{ platformType: configv1.VSpherePlatformType, @@ -669,6 +798,15 @@ var _ = Describe("Provider Config", func() { }, expectedOut: machinev1beta1resourcebuilder.GCPProviderSpec().BuildRawExtension().Raw, }), + Entry("with an OpenStack config", rawConfigTableInput{ + providerConfig: &providerConfig{ + platformType: configv1.OpenStackPlatformType, + openstack: OpenStackProviderConfig{ + providerConfig: *machinev1beta1resourcebuilder.OpenStackProviderSpec().Build(), + }, + }, + expectedOut: machinev1beta1resourcebuilder.OpenStackProviderSpec().BuildRawExtension().Raw, + }), Entry("with a VSphere config", rawConfigTableInput{ providerConfig: providerConfig{ platformType: configv1.VSpherePlatformType, diff --git a/pkg/webhooks/controlplanemachineset/webhooks.go b/pkg/webhooks/controlplanemachineset/webhooks.go index 0e8d75c68..4e486e413 100644 --- a/pkg/webhooks/controlplanemachineset/webhooks.go +++ b/pkg/webhooks/controlplanemachineset/webhooks.go @@ -279,6 +279,8 @@ func validateOpenShiftProviderConfig(logger logr.Logger, parentPath *field.Path, return validateOpenShiftAzureProviderConfig(providerSpecPath.Child("value"), providerConfig.Azure()) case configv1.GCPPlatformType: return validateOpenShiftGCPProviderConfig(providerSpecPath.Child("value"), providerConfig.GCP()) + case configv1.OpenStackPlatformType: + return validateOpenShiftOpenStackProviderConfig(providerSpecPath.Child("value"), providerConfig.OpenStack()) } return []error{} @@ -304,6 +306,12 @@ func 
validateOpenShiftGCPProviderConfig(parentPath *field.Path, providerConfig p return []error{} } + +// validateOpenShiftOpenStackProviderConfig runs OpenStack specific checks on the provider config on the ControlPlaneMachineSet. +// This ensures that the ControlPlaneMachineSet can safely replace OpenStack control plane machines. +func validateOpenShiftOpenStackProviderConfig(parentPath *field.Path, providerConfig providerconfig.OpenStackProviderConfig) []error { + return []error{} +} + // fetchControlPlaneMachines returns all control plane machines in the cluster. func (r *ControlPlaneMachineSetWebhook) fetchControlPlaneMachines(ctx context.Context) ([]machinev1beta1.Machine, error) { machineList := machinev1beta1.MachineList{} diff --git a/pkg/webhooks/controlplanemachineset/webhooks_test.go b/pkg/webhooks/controlplanemachineset/webhooks_test.go index 44753d155..bf9a283bc 100644 --- a/pkg/webhooks/controlplanemachineset/webhooks_test.go +++ b/pkg/webhooks/controlplanemachineset/webhooks_test.go @@ -25,6 +25,7 @@ import ( configv1 "github.com/openshift/api/config/v1" machinev1 "github.com/openshift/api/machine/v1" + machinev1alpha1 "github.com/openshift/api/machine/v1alpha1" machinev1beta1 "github.com/openshift/api/machine/v1beta1" "github.com/openshift/cluster-api-actuator-pkg/testutils" "github.com/openshift/cluster-api-actuator-pkg/testutils/resourcebuilder" @@ -698,6 +699,99 @@ var _ = Describe("Webhooks", func() { Expect(k8sClient.Create(ctx, cpms)).To(Succeed()) }) }) + + Context("on OpenStack", func() { + + var filterRootVolumeOne = machinev1.RootVolume{ + AvailabilityZone: "cinder-az1", + } + var filterRootVolumeTwo = machinev1.RootVolume{ + AvailabilityZone: "cinder-az2", + } + var filterRootVolumeThree = machinev1.RootVolume{ + AvailabilityZone: "cinder-az3", + } + var filterRootVolumeFour = machinev1.RootVolume{ + AvailabilityZone: "cinder-az4", + } + var zone1Builder =
machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az1").WithRootVolume(filterRootVolumeOne) + var zone2Builder = machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az2").WithRootVolume(filterRootVolumeTwo) + var zone3Builder = machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az3").WithRootVolume(filterRootVolumeThree) + var zone4Builder = machinev1resourcebuilder.OpenStackFailureDomain().WithComputeAvailabilityZone("nova-az4").WithRootVolume(filterRootVolumeFour) + + BeforeEach(func() { + providerSpec := machinev1beta1resourcebuilder.OpenStackProviderSpec() + machineTemplate = machinev1resourcebuilder.OpenShiftMachineV1Beta1Template().WithProviderSpecBuilder(providerSpec) + // Default CPMS builder should be valid, individual tests will override to make it invalid + builder = machinev1resourcebuilder.ControlPlaneMachineSet().WithNamespace(namespaceName).WithMachineTemplateBuilder(machineTemplate) + + machineBuilder := machinev1beta1resourcebuilder.Machine().WithNamespace(namespaceName) + + By("Creating a selection of Machines") + for _, az := range []string{"az1", "az2", "az3"} { + rootVolume := &machinev1alpha1.RootVolume{ + Zone: "cinder-" + az, + } + controlPlaneMachineBuilder := machineBuilder.WithGenerateName("control-plane-machine-").AsMaster().WithProviderSpecBuilder(providerSpec.WithZone("nova-" + az).WithRootVolume(rootVolume)) + + controlPlaneMachine := controlPlaneMachineBuilder.Build() + Expect(k8sClient.Create(ctx, controlPlaneMachine)).To(Succeed()) + } + }) + + It("with a valid failure domains spec", func() { + cpms := builder.WithMachineTemplateBuilder(machineTemplate.WithFailureDomainsBuilder( + machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + zone1Builder, + zone2Builder, + zone3Builder, + ), + )).Build() + + Expect(k8sClient.Create(ctx, cpms)).To(Succeed()) + }) + + It("with a mismatched failure domains spec", func() 
{ + cpms := builder.WithMachineTemplateBuilder(machineTemplate.WithFailureDomainsBuilder( + machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + zone1Builder, + zone2Builder, + zone4Builder, + ), + )).Build() + + Expect(k8sClient.Create(ctx, cpms)).To(MatchError( + ContainSubstring("spec.template.machines_v1beta1_machine_openshift_io.failureDomains: Forbidden: control plane machines are using unspecified failure domain(s) [OpenStackFailureDomain{AvailabilityZone:nova-az3, RootVolume:{AvailabilityZone:cinder-az3}}]"), + )) + }) + + It("when reducing the availability", func() { + cpms := builder.WithMachineTemplateBuilder(machineTemplate.WithFailureDomainsBuilder( + machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + zone1Builder, + zone2Builder, + ), + )).Build() + + Expect(k8sClient.Create(ctx, cpms)).To(MatchError( + ContainSubstring("spec.template.machines_v1beta1_machine_openshift_io.failureDomains: Forbidden: control plane machines are using unspecified failure domain(s) [OpenStackFailureDomain{AvailabilityZone:nova-az3, RootVolume:{AvailabilityZone:cinder-az3}}]"), + )) + }) + + It("when increasing the availability", func() { + cpms := builder.WithMachineTemplateBuilder(machineTemplate.WithFailureDomainsBuilder( + machinev1resourcebuilder.OpenStackFailureDomains().WithFailureDomainBuilders( + zone1Builder, + zone2Builder, + zone3Builder, + zone4Builder, + ), + )).Build() + + Expect(k8sClient.Create(ctx, cpms)).To(Succeed()) + }) + }) + }) Context("on update", func() { @@ -946,5 +1040,70 @@ var _ = Describe("Webhooks", func() { })()).Should(MatchError(ContainSubstring("ControlPlaneMachineSet.machine.openshift.io \"cluster\" is invalid: spec.selector: Invalid value: \"object\": selector is immutable")), "The selector should be immutable") }) }) + + Context("on OpenStack", func() { + BeforeEach(func() { + providerSpec := machinev1beta1resourcebuilder.OpenStackProviderSpec() + machineTemplate := 
machinev1resourcebuilder.OpenShiftMachineV1Beta1Template().WithProviderSpecBuilder(providerSpec) + // Default CPMS builder should be valid + cpms = machinev1resourcebuilder.ControlPlaneMachineSet().WithNamespace(namespaceName).WithMachineTemplateBuilder(machineTemplate).Build() + + machineBuilder := machinev1beta1resourcebuilder.Machine().WithNamespace(namespaceName) + controlPlaneMachineBuilder := machineBuilder.WithGenerateName("control-plane-machine-").AsMaster().WithProviderSpecBuilder(providerSpec) + By("Creating a selection of Machines") + for i := 0; i < 3; i++ { + controlPlaneMachine := controlPlaneMachineBuilder.Build() + Expect(k8sClient.Create(ctx, controlPlaneMachine)).To(Succeed()) + } + + By("Creating a valid ControlPlaneMachineSet") + Expect(k8sClient.Create(ctx, cpms)).To(Succeed()) + }) + + It("with 4 replicas", func() { + // This is an openapi validation but it makes sense to include it here as well + Expect(komega.Update(cpms, func() { + four := int32(4) + cpms.Spec.Replicas = &four + })()).Should(MatchError(ContainSubstring("Unsupported value: 4: supported values: \"3\", \"5\""))) + }) + + It("with 5 replicas", func() { + // Five replicas is a valid value but the existing CPMS has three replicas + Expect(komega.Update(cpms, func() { + five := int32(5) + cpms.Spec.Replicas = &five + })()).Should(MatchError(ContainSubstring("ControlPlaneMachineSet.machine.openshift.io \"cluster\" is invalid: spec.replicas: Invalid value: \"integer\": replicas is immutable")), "Replicas should be immutable") + }) + + It("when modifying the machine labels and the selector still matches", func() { + Expect(komega.Update(cpms, func() { + cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.ObjectMeta.Labels["new"] = dummyValue + })()).Should(Succeed(), "Machine label updates are allowed provided the selector still matches") + }) + + It("when modifying the machine labels so that the selector no longer matches", func() { + Expect(komega.Update(cpms, func() { + 
cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.ObjectMeta.Labels = map[string]string{ + "different": "labels", + machinev1beta1.MachineClusterIDLabel: "cpms-cluster-test-id-different", + openshiftMachineRoleLabel: masterMachineRole, + openshiftMachineTypeLabel: masterMachineRole, + } + })()).Should(MatchError(ContainSubstring("selector does not match template labels")), "The selector must always match the machine labels") + }) + + It("when modifying the machine labels to remove the cluster ID label", func() { + Expect(komega.Update(cpms, func() { + delete(cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.ObjectMeta.Labels, machinev1beta1.MachineClusterIDLabel) + })()).Should(MatchError(ContainSubstring("ControlPlaneMachineSet.machine.openshift.io \"cluster\" is invalid: spec.template.machines_v1beta1_machine_openshift_io.metadata.labels: Invalid value: \"object\": label 'machine.openshift.io/cluster-api-cluster' is required")), "The labels must always contain a cluster ID label") + }) + + It("when mutating the selector", func() { + Expect(komega.Update(cpms, func() { + cpms.Spec.Selector.MatchLabels["new"] = dummyValue + })()).Should(MatchError(ContainSubstring("ControlPlaneMachineSet.machine.openshift.io \"cluster\" is invalid: spec.selector: Invalid value: \"object\": selector is immutable")), "The selector should be immutable") + }) + }) }) }) diff --git a/test/e2e/framework/framework.go b/test/e2e/framework/framework.go index 19c945a3a..189120097 100644 --- a/test/e2e/framework/framework.go +++ b/test/e2e/framework/framework.go @@ -21,10 +21,12 @@ import ( "encoding/json" "errors" "fmt" + "os" "regexp" "strconv" "github.com/go-logr/logr" + "github.com/google/uuid" configv1 "github.com/openshift/api/config/v1" machinev1 "github.com/openshift/api/machine/v1" machinev1beta1 "github.com/openshift/api/machine/v1beta1" @@ -55,6 +57,9 @@ var ( // This means that even though the format is correct, we haven't implemented the logic to increase // this instance size. 
errInstanceTypeNotSupported = errors.New("instance type is not supported") + + // errMissingInstanceSize is returned when the instance size is missing. + errMissingInstanceSize = errors.New("instance size is missing") ) // Framework is an interface for getting clients and information @@ -88,6 +93,9 @@ type Framework interface { // managed by the control plane machine set. IncreaseProviderSpecInstanceSize(providerSpec *runtime.RawExtension) error + // TagInstanceInProviderSpec tags the instance in the provider spec. + TagInstanceInProviderSpec(providerSpec *runtime.RawExtension) error + // ConvertToControlPlaneMachineSetProviderSpec converts a control plane machine provider spec // to a control plane machine set suitable provider spec. ConvertToControlPlaneMachineSetProviderSpec(providerSpec machinev1beta1.ProviderSpec) (*runtime.RawExtension, error) @@ -232,6 +240,27 @@ func (f *framework) IncreaseProviderSpecInstanceSize(rawProviderSpec *runtime.Ra return increaseGCPInstanceSize(rawProviderSpec, providerConfig) case configv1.NutanixPlatformType: return increaseNutanixInstanceSize(rawProviderSpec, providerConfig) + case configv1.OpenStackPlatformType: + return increaseOpenStackInstanceSize(rawProviderSpec, providerConfig) + default: + return fmt.Errorf("%w: %s", errUnsupportedPlatform, f.platform) + } +} + +// TagInstanceInProviderSpec tags the instance in the providerSpec. 
+func (f *framework) TagInstanceInProviderSpec(rawProviderSpec *runtime.RawExtension) error { + providerConfig, err := providerconfig.NewProviderConfigFromMachineSpec(f.logger, machinev1beta1.MachineSpec{ + ProviderSpec: machinev1beta1.ProviderSpec{ + Value: rawProviderSpec, + }, + }) + if err != nil { + return fmt.Errorf("failed to get provider config: %w", err) + } + + switch f.platform { + case configv1.OpenStackPlatformType: + return tagOpenStackProviderSpec(rawProviderSpec, providerConfig) default: return fmt.Errorf("%w: %s", errUnsupportedPlatform, f.platform) } @@ -342,6 +371,8 @@ func (f *framework) ConvertToControlPlaneMachineSetProviderSpec(providerSpec mac return convertGCPProviderConfigToControlPlaneMachineSetProviderSpec(providerConfig) case configv1.NutanixPlatformType: return convertNutanixProviderConfigToControlPlaneMachineSetProviderSpec(providerConfig) + case configv1.OpenStackPlatformType: + return convertOpenStackProviderConfigToControlPlaneMachineSetProviderSpec(providerConfig) default: return nil, fmt.Errorf("%w: %s", errUnsupportedPlatform, f.platform) } @@ -411,6 +442,27 @@ func convertNutanixProviderConfigToControlPlaneMachineSetProviderSpec(providerCo }, nil } +// convertOpenStackProviderConfigToControlPlaneMachineSetProviderSpec converts an OpenStack providerConfig into a +// raw control plane machine set provider spec. +func convertOpenStackProviderConfigToControlPlaneMachineSetProviderSpec(providerConfig providerconfig.ProviderConfig) (*runtime.RawExtension, error) { + openStackPs := providerConfig.OpenStack().Config() + + openStackPs.AvailabilityZone = "" + + if openStackPs.RootVolume != nil { + openStackPs.RootVolume.Zone = "" + } + + rawBytes, err := json.Marshal(openStackPs) + if err != nil { + return nil, fmt.Errorf("error marshalling openstack providerSpec: %w", err) + } + + return &runtime.RawExtension{ + Raw: rawBytes, + }, nil +} + // loadClient returns a new controller-runtime client. 
 func loadClient(sch *runtime.Scheme) (runtimeclient.Client, error) {
 	cfg, err := config.GetConfig()
@@ -475,6 +527,8 @@ func getPlatformSupportLevel(k8sClient runtimeclient.Client) (PlatformSupportLev
 		return Manual, platformType, nil
 	case configv1.NutanixPlatformType:
 		return Manual, platformType, nil
+	case configv1.OpenStackPlatformType:
+		return Manual, platformType, nil
 	default:
 		return Unsupported, platformType, nil
 	}
@@ -555,6 +609,20 @@ func increaseAzureInstanceSize(rawProviderSpec *runtime.RawExtension, providerCo
 	return nil
 }
 
+// tagOpenStackProviderSpec adds a tag to the providerSpec for an OpenStack providerSpec.
+func tagOpenStackProviderSpec(rawProviderSpec *runtime.RawExtension, providerConfig providerconfig.ProviderConfig) error {
+	cfg := providerConfig.OpenStack().Config()
+
+	randomTag := uuid.New().String()
+	cfg.Tags = append(cfg.Tags, fmt.Sprintf("cpms-test-tag-%s", randomTag))
+
+	if err := setProviderSpecValue(rawProviderSpec, cfg); err != nil {
+		return fmt.Errorf("failed to set provider spec value: %w", err)
+	}
+
+	return nil
+}
+
 // nextAzureVMSize returns the next Azure VM size in the series.
 // In Azure terms this normally means doubling the size of the underlying instance.
 // This should mean we do not need to update this when the installer changes the default instance size.
@@ -621,6 +689,23 @@ func increaseNutanixInstanceSize(rawProviderSpec *runtime.RawExtension, provider
 	return nil
 }
 
+// increaseOpenStackInstanceSize increases the instance size on the providerSpec for an OpenStack providerSpec.
+func increaseOpenStackInstanceSize(rawProviderSpec *runtime.RawExtension, providerConfig providerconfig.ProviderConfig) error {
+	cfg := providerConfig.OpenStack().Config()
+
+	flavor := os.Getenv("OPENSTACK_CONTROLPLANE_FLAVOR_ALTERNATE")
+	if flavor == "" {
+		return fmt.Errorf("OPENSTACK_CONTROLPLANE_FLAVOR_ALTERNATE environment variable not set: %w", errMissingInstanceSize)
+	}
+
+	cfg.Flavor = flavor
+
+	if err := setProviderSpecValue(rawProviderSpec, cfg); err != nil {
+		return fmt.Errorf("failed to set provider spec value: %w", err)
+	}
+
+	return nil
+}
+
 // nextGCPMachineSize returns the next GCP machine size in the series.
 // The Machine sizes being used are in format -standard-.
 func nextGCPMachineSize(current string) (string, error) {
diff --git a/test/e2e/helpers/controlplanemachineset.go b/test/e2e/helpers/controlplanemachineset.go
index 5647832e2..8d58c33a4 100644
--- a/test/e2e/helpers/controlplanemachineset.go
+++ b/test/e2e/helpers/controlplanemachineset.go
@@ -20,10 +20,12 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"os"
 
 	. "github.com/onsi/ginkgo/v2"
 	.
"github.com/onsi/gomega"
 
+	configv1 "github.com/openshift/api/config/v1"
 	machinev1 "github.com/openshift/api/machine/v1"
 	machinev1beta1 "github.com/openshift/api/machine/v1beta1"
 	"github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/framework"
@@ -303,9 +305,22 @@ func IncreaseControlPlaneMachineSetInstanceSize(testFramework framework.Framewor
 	Eventually(komega.Get(cpms), gomegaArgs...).Should(Succeed(), "control plane machine set should exist")
 
 	originalProviderSpec := cpms.Spec.Template.OpenShiftMachineV1Beta1Machine.Spec.ProviderSpec
 	updatedProviderSpec := originalProviderSpec.DeepCopy()
-	Expect(testFramework.IncreaseProviderSpecInstanceSize(updatedProviderSpec.Value)).To(Succeed(), "provider spec should be updated with bigger instance size")
+
+	platformType := testFramework.GetPlatformType()
+
+	switch platformType {
+	case configv1.OpenStackPlatformType:
+		// OpenStack flavors are not predictable. So if OPENSTACK_CONTROLPLANE_FLAVOR_ALTERNATE is set in the environment, we'll use it
+		// to change the instance flavor, otherwise we just tag the instance with a new tag, which will trigger the redeployment.
+ if os.Getenv("OPENSTACK_CONTROLPLANE_FLAVOR_ALTERNATE") == "" { + Expect(testFramework.TagInstanceInProviderSpec(updatedProviderSpec.Value)).To(Succeed(), "provider spec should be updated with a new tag") + } else { + Expect(testFramework.IncreaseProviderSpecInstanceSize(updatedProviderSpec.Value)).To(Succeed(), "provider spec should be updated with bigger instance size") + } + default: + Expect(testFramework.IncreaseProviderSpecInstanceSize(updatedProviderSpec.Value)).To(Succeed(), "provider spec should be updated with bigger instance size") + } By("Increasing the control plane machine set instance size") diff --git a/test/e2e/helpers/machine.go b/test/e2e/helpers/machine.go index e1aa7384f..df74a399e 100644 --- a/test/e2e/helpers/machine.go +++ b/test/e2e/helpers/machine.go @@ -20,6 +20,7 @@ import ( "context" "errors" "fmt" + "os" "regexp" "sort" "strconv" @@ -29,6 +30,7 @@ import ( . "github.com/onsi/ginkgo/v2" . "github.com/onsi/gomega" + configv1 "github.com/openshift/api/config/v1" machinev1beta1 "github.com/openshift/api/machine/v1beta1" "github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/framework" @@ -404,7 +406,20 @@ func IncreaseControlPlaneMachineInstanceSize(testFramework framework.Framework, originalProviderSpec := machine.Spec.ProviderSpec updatedProviderSpec := originalProviderSpec.DeepCopy() - Expect(testFramework.IncreaseProviderSpecInstanceSize(updatedProviderSpec.Value)).To(Succeed(), "provider spec should be updated with bigger instance size") + platformType := testFramework.GetPlatformType() + + switch platformType { + case configv1.OpenStackPlatformType: + // OpenStack flavors are not predictable. So if OPENSTACK_CONTROLPLANE_FLAVOR_ALTERNATE is set in the environment, we'll use it + // to change the instance flavor, otherwise we just tag the instance with a new tag, which will trigger the redeployment. 
+ if os.Getenv("OPENSTACK_CONTROLPLANE_FLAVOR_ALTERNATE") == "" { + Expect(testFramework.TagInstanceInProviderSpec(updatedProviderSpec.Value)).To(Succeed(), "provider spec should be updated with a new tag") + } else { + Expect(testFramework.IncreaseProviderSpecInstanceSize(updatedProviderSpec.Value)).To(Succeed(), "provider spec should be updated with bigger instance size") + } + default: + Expect(testFramework.IncreaseProviderSpecInstanceSize(updatedProviderSpec.Value)).To(Succeed(), "provider spec should be updated with bigger instance size") + } By(fmt.Sprintf("Updating the provider spec of the control plane machine at index %d", index)) diff --git a/test/e2e/presubmit_test.go b/test/e2e/presubmit_test.go index 07566e49b..040f5a282 100644 --- a/test/e2e/presubmit_test.go +++ b/test/e2e/presubmit_test.go @@ -21,6 +21,7 @@ import ( . "github.com/onsi/ginkgo/v2" + configv1 "github.com/openshift/api/config/v1" machinev1 "github.com/openshift/api/machine/v1" machinev1beta1 "github.com/openshift/api/machine/v1beta1" @@ -106,6 +107,12 @@ var _ = Describe("ControlPlaneMachineSet Operator", framework.PreSubmit(), func( Context("and a defaulted value is deleted from the ControlPlaneMachineSet", func() { var originalProviderSpec machinev1beta1.ProviderSpec BeforeEach(func() { + // There is no defaulting webhook for the machines running on the following platforms. 
+ switch testFramework.GetPlatformType() { + case configv1.OpenStackPlatformType: + Skip("Skipping test on OpenStack platform") + } + _ = helpers.EnsureControlPlaneMachineSetUpdateStrategy(testFramework, machinev1.RollingUpdate) originalProviderSpec = helpers.UpdateDefaultedValueFromControlPlaneMachineSetProviderConfig(testFramework) }) diff --git a/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/awsdnsspec.go b/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/awsdnsspec.go new file mode 100644 index 000000000..4f7ce43d1 --- /dev/null +++ b/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/awsdnsspec.go @@ -0,0 +1,23 @@ +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1 + +// AWSDNSSpecApplyConfiguration represents an declarative configuration of the AWSDNSSpec type for use +// with apply. +type AWSDNSSpecApplyConfiguration struct { + PrivateZoneIAMRole *string `json:"privateZoneIAMRole,omitempty"` +} + +// AWSDNSSpecApplyConfiguration constructs an declarative configuration of the AWSDNSSpec type for use with +// apply. +func AWSDNSSpec() *AWSDNSSpecApplyConfiguration { + return &AWSDNSSpecApplyConfiguration{} +} + +// WithPrivateZoneIAMRole sets the PrivateZoneIAMRole field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the PrivateZoneIAMRole field is set to the value of the last call. 
+func (b *AWSDNSSpecApplyConfiguration) WithPrivateZoneIAMRole(value string) *AWSDNSSpecApplyConfiguration { + b.PrivateZoneIAMRole = &value + return b +} diff --git a/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/dnsplatformspec.go b/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/dnsplatformspec.go new file mode 100644 index 000000000..8f43c8c5f --- /dev/null +++ b/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/dnsplatformspec.go @@ -0,0 +1,36 @@ +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1 + +import ( + v1 "github.com/openshift/api/config/v1" +) + +// DNSPlatformSpecApplyConfiguration represents an declarative configuration of the DNSPlatformSpec type for use +// with apply. +type DNSPlatformSpecApplyConfiguration struct { + Type *v1.PlatformType `json:"type,omitempty"` + AWS *AWSDNSSpecApplyConfiguration `json:"aws,omitempty"` +} + +// DNSPlatformSpecApplyConfiguration constructs an declarative configuration of the DNSPlatformSpec type for use with +// apply. +func DNSPlatformSpec() *DNSPlatformSpecApplyConfiguration { + return &DNSPlatformSpecApplyConfiguration{} +} + +// WithType sets the Type field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Type field is set to the value of the last call. +func (b *DNSPlatformSpecApplyConfiguration) WithType(value v1.PlatformType) *DNSPlatformSpecApplyConfiguration { + b.Type = &value + return b +} + +// WithAWS sets the AWS field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the AWS field is set to the value of the last call. 
+func (b *DNSPlatformSpecApplyConfiguration) WithAWS(value *AWSDNSSpecApplyConfiguration) *DNSPlatformSpecApplyConfiguration { + b.AWS = value + return b +} diff --git a/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/dnsspec.go b/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/dnsspec.go index cfa268744..b534ef943 100644 --- a/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/dnsspec.go +++ b/vendor/github.com/openshift/client-go/config/applyconfigurations/config/v1/dnsspec.go @@ -5,9 +5,10 @@ package v1 // DNSSpecApplyConfiguration represents an declarative configuration of the DNSSpec type for use // with apply. type DNSSpecApplyConfiguration struct { - BaseDomain *string `json:"baseDomain,omitempty"` - PublicZone *DNSZoneApplyConfiguration `json:"publicZone,omitempty"` - PrivateZone *DNSZoneApplyConfiguration `json:"privateZone,omitempty"` + BaseDomain *string `json:"baseDomain,omitempty"` + PublicZone *DNSZoneApplyConfiguration `json:"publicZone,omitempty"` + PrivateZone *DNSZoneApplyConfiguration `json:"privateZone,omitempty"` + Platform *DNSPlatformSpecApplyConfiguration `json:"platform,omitempty"` } // DNSSpecApplyConfiguration constructs an declarative configuration of the DNSSpec type for use with @@ -39,3 +40,11 @@ func (b *DNSSpecApplyConfiguration) WithPrivateZone(value *DNSZoneApplyConfigura b.PrivateZone = value return b } + +// WithPlatform sets the Platform field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Platform field is set to the value of the last call. 
+func (b *DNSSpecApplyConfiguration) WithPlatform(value *DNSPlatformSpecApplyConfiguration) *DNSSpecApplyConfiguration { + b.Platform = value + return b +} diff --git a/vendor/github.com/openshift/client-go/config/applyconfigurations/internal/internal.go b/vendor/github.com/openshift/client-go/config/applyconfigurations/internal/internal.go index 6aa765f55..8ec86a2ab 100644 --- a/vendor/github.com/openshift/client-go/config/applyconfigurations/internal/internal.go +++ b/vendor/github.com/openshift/client-go/config/applyconfigurations/internal/internal.go @@ -112,6 +112,13 @@ var schemaYAML = typed.YAMLObject(`types: elementType: namedType: __untyped_deduced_ elementRelationship: separable +- name: com.github.openshift.api.config.v1.AWSDNSSpec + map: + fields: + - name: privateZoneIAMRole + type: + scalar: string + default: "" - name: com.github.openshift.api.config.v1.AWSIngressSpec map: fields: @@ -932,6 +939,21 @@ var schemaYAML = typed.YAMLObject(`types: type: namedType: com.github.openshift.api.config.v1.DNSStatus default: {} +- name: com.github.openshift.api.config.v1.DNSPlatformSpec + map: + fields: + - name: aws + type: + namedType: com.github.openshift.api.config.v1.AWSDNSSpec + - name: type + type: + scalar: string + default: "" + unions: + - discriminator: type + fields: + - fieldName: aws + discriminatorValue: AWS - name: com.github.openshift.api.config.v1.DNSSpec map: fields: @@ -939,6 +961,10 @@ var schemaYAML = typed.YAMLObject(`types: type: scalar: string default: "" + - name: platform + type: + namedType: com.github.openshift.api.config.v1.DNSPlatformSpec + default: {} - name: privateZone type: namedType: com.github.openshift.api.config.v1.DNSZone diff --git a/vendor/github.com/openshift/client-go/machine/applyconfigurations/internal/internal.go b/vendor/github.com/openshift/client-go/machine/applyconfigurations/internal/internal.go index ee7813534..4cd121e45 100644 --- 
a/vendor/github.com/openshift/client-go/machine/applyconfigurations/internal/internal.go +++ b/vendor/github.com/openshift/client-go/machine/applyconfigurations/internal/internal.go @@ -212,6 +212,12 @@ var schemaYAML = typed.YAMLObject(`types: elementType: namedType: com.github.openshift.api.machine.v1.GCPFailureDomain elementRelationship: atomic + - name: openstack + type: + list: + elementType: + namedType: com.github.openshift.api.machine.v1.OpenStackFailureDomain + elementRelationship: atomic - name: platform type: scalar: string @@ -225,6 +231,8 @@ var schemaYAML = typed.YAMLObject(`types: discriminatorValue: Azure - fieldName: gcp discriminatorValue: GCP + - fieldName: openstack + discriminatorValue: OpenStack - name: com.github.openshift.api.machine.v1.GCPFailureDomain map: fields: @@ -247,6 +255,21 @@ var schemaYAML = typed.YAMLObject(`types: type: namedType: com.github.openshift.api.machine.v1beta1.MachineSpec default: {} +- name: com.github.openshift.api.machine.v1.OpenStackFailureDomain + map: + fields: + - name: availabilityZone + type: + scalar: string + - name: rootVolume + type: + namedType: com.github.openshift.api.machine.v1.RootVolume +- name: com.github.openshift.api.machine.v1.RootVolume + map: + fields: + - name: availabilityZone + type: + scalar: string - name: com.github.openshift.api.machine.v1beta1.Condition map: fields: diff --git a/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/failuredomains.go b/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/failuredomains.go index a24c74451..0fecd931f 100644 --- a/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/failuredomains.go +++ b/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/failuredomains.go @@ -9,10 +9,11 @@ import ( // FailureDomainsApplyConfiguration represents an declarative configuration of the FailureDomains type for use // with apply. 
type FailureDomainsApplyConfiguration struct { - Platform *v1.PlatformType `json:"platform,omitempty"` - AWS *[]AWSFailureDomainApplyConfiguration `json:"aws,omitempty"` - Azure *[]AzureFailureDomainApplyConfiguration `json:"azure,omitempty"` - GCP *[]GCPFailureDomainApplyConfiguration `json:"gcp,omitempty"` + Platform *v1.PlatformType `json:"platform,omitempty"` + AWS *[]AWSFailureDomainApplyConfiguration `json:"aws,omitempty"` + Azure *[]AzureFailureDomainApplyConfiguration `json:"azure,omitempty"` + GCP *[]GCPFailureDomainApplyConfiguration `json:"gcp,omitempty"` + OpenStack []OpenStackFailureDomainApplyConfiguration `json:"openstack,omitempty"` } // FailureDomainsApplyConfiguration constructs an declarative configuration of the FailureDomains type for use with @@ -88,3 +89,16 @@ func (b *FailureDomainsApplyConfiguration) WithGCP(values ...*GCPFailureDomainAp } return b } + +// WithOpenStack adds the given value to the OpenStack field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the OpenStack field. 
+func (b *FailureDomainsApplyConfiguration) WithOpenStack(values ...*OpenStackFailureDomainApplyConfiguration) *FailureDomainsApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithOpenStack") + } + b.OpenStack = append(b.OpenStack, *values[i]) + } + return b +} diff --git a/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/openstackfailuredomain.go b/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/openstackfailuredomain.go index 3b13d3d54..cbee21bdf 100644 --- a/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/openstackfailuredomain.go +++ b/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/openstackfailuredomain.go @@ -5,7 +5,8 @@ package v1 // OpenStackFailureDomainApplyConfiguration represents an declarative configuration of the OpenStackFailureDomain type for use // with apply. type OpenStackFailureDomainApplyConfiguration struct { - AvailabilityZone *string `json:"availabilityZone,omitempty"` + AvailabilityZone *string `json:"availabilityZone,omitempty"` + RootVolume *RootVolumeApplyConfiguration `json:"rootVolume,omitempty"` } // OpenStackFailureDomainApplyConfiguration constructs an declarative configuration of the OpenStackFailureDomain type for use with @@ -21,3 +22,11 @@ func (b *OpenStackFailureDomainApplyConfiguration) WithAvailabilityZone(value st b.AvailabilityZone = &value return b } + +// WithRootVolume sets the RootVolume field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the RootVolume field is set to the value of the last call. 
+func (b *OpenStackFailureDomainApplyConfiguration) WithRootVolume(value *RootVolumeApplyConfiguration) *OpenStackFailureDomainApplyConfiguration { + b.RootVolume = value + return b +} diff --git a/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/rootvolume.go b/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/rootvolume.go new file mode 100644 index 000000000..e61ae38bd --- /dev/null +++ b/vendor/github.com/openshift/client-go/machine/applyconfigurations/machine/v1/rootvolume.go @@ -0,0 +1,23 @@ +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1 + +// RootVolumeApplyConfiguration represents an declarative configuration of the RootVolume type for use +// with apply. +type RootVolumeApplyConfiguration struct { + AvailabilityZone *string `json:"availabilityZone,omitempty"` +} + +// RootVolumeApplyConfiguration constructs an declarative configuration of the RootVolume type for use with +// apply. +func RootVolume() *RootVolumeApplyConfiguration { + return &RootVolumeApplyConfiguration{} +} + +// WithAvailabilityZone sets the AvailabilityZone field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the AvailabilityZone field is set to the value of the last call. 
+func (b *RootVolumeApplyConfiguration) WithAvailabilityZone(value string) *RootVolumeApplyConfiguration { + b.AvailabilityZone = &value + return b +} diff --git a/vendor/modules.txt b/vendor/modules.txt index b727c764a..a000ce136 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -533,7 +533,7 @@ github.com/openshift/api/config/v1alpha1 github.com/openshift/api/machine/v1 github.com/openshift/api/machine/v1alpha1 github.com/openshift/api/machine/v1beta1 -# github.com/openshift/client-go v0.0.0-20230503144108-75015d2347cb +# github.com/openshift/client-go v0.0.0-20230607134213-3cd0021bbee3 ## explicit; go 1.20 github.com/openshift/client-go/config/applyconfigurations/config/v1 github.com/openshift/client-go/config/applyconfigurations/internal