Kubevirt platform #779
Conversation
✔️ Deploy Preview for hypershift-docs ready!
🔨 Explore the source changes: 0da4c20
🔍 Inspect the deploy log: https://app.netlify.com/sites/hypershift-docs/deploys/61c46558b817400008257103
😎 Browse the preview: https://deploy-preview-779--hypershift-docs.netlify.app/reference/api
	return ctrl.Result{}, nil
}

func (r NodePoolReconciler) reconcileKubevirtMachineTemplate(ctx context.Context,
Let's put this into its own kubevirt.go file in this package.

Moved.
Do you think we have to keep this function under the NodePoolReconciler struct?
	}
}

func KubevirtMachineTemplate(infraName string, hostedCluster *hyperv1.HostedCluster, nodePool *hyperv1.NodePool, controlPlaneNamespace string) (*capikubevirt.KubevirtMachineTemplate, string) {
Let's put this into its own kubevirt.go file in this package.

Done
	hcluster *hyperv1.HostedCluster,
	controlPlaneNamespace string) error

AppendPlatformSpecificPolicyRules(rules []rbacv1.PolicyRule) []rbacv1.PolicyRule
Please, let's add a comment providing context on how this func should be implemented, in line with the comments for the other interface funcs.

Maybe we want to start with just allowing to return additional rules? In theory the current signature allows more dynamic introspection and addition, but I'm not sure it is needed, and people can just return nil instead of having to return what was passed in.

Changed it, please see 22e8c33
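For illustration, here is a minimal sketch of the simplified shape being suggested (the method name and the rules are assumptions for this example, not the actual change in 22e8c33): the platform returns only its additional rules, and the caller appends them to the common set, with nil meaning "nothing extra".

package kubevirt

import rbacv1 "k8s.io/api/rbac/v1"

// CAPIProviderPolicyRules returns the extra RBAC rules this platform's CAPI
// provider needs; a platform that needs nothing extra can simply return nil.
func (p Kubevirt) CAPIProviderPolicyRules() []rbacv1.PolicyRule {
	return []rbacv1.PolicyRule{
		{
			APIGroups: []string{"kubevirt.io"},
			Resources: []string{"virtualmachineinstances"},
			Verbs:     []string{"*"},
		},
	}
}

The caller can then do role.Rules = append(commonRules, p.CAPIProviderPolicyRules()...), which is a no-op append when the platform returns nil.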
func (p Kubevirt) CAPIProviderDeploymentSpec(hcluster *hyperv1.HostedCluster, tokenMinterImage string) (*appsv1.DeploymentSpec, error) {
	defaultMode := int32(420)
	capkLabels := map[string]string{
Let's drop these labels as we don't really need them; the generic func sets them.

Done
go.mod
Outdated
	sigs.k8s.io/cluster-api => sigs.k8s.io/cluster-api v1.0.0
)

replace sigs.k8s.io/cluster-api-provider-kubevirt => github.com/nirarg/cluster-api-provider-kubevirt-1 v0.0.0-20211212090220-ea93f714afd5
why is this coming from your personal fork?

As the capk repository is still a work in progress, not all changes are merged.
Once all the changes from my local branch are merged into openshift, I will change it to use that.

Changed to use the fork under openshift
const (
	hostedClusterAnnotation = "hypershift.openshift.io/cluster"
	// TODO nargaman replace with pre-built valid image
	imageCAPK = "registry:5000/capk:devel"
🤨 Please, let's update this to the proper image.

There is a PR to add capk to openshift-ci, which would add the ability to generate the proper image:
openshift/release#24575

Changed the image to one created from the fork in the openshift org:
registry.ci.openshift.org/hypershift/cluster-api-kubevirt-controller:1.0.0
Thanks @nirarg, this is looking great! Dropped some comments, mostly cosmetic.
rmohr left a comment

On a first look, two minor comments.
cmd/cluster/none/create.go
Outdated
	return nil
}

func SetCommon(exampleOptions *apifixtures.ExampleOptions, opts *core.CreateOptions) {
For my taste SetCommon does not look right. It is too unspecific and small to justify code reuse, and it makes it harder to see which options are defaulted. What do you think?

Removed it
	hcluster *hyperv1.HostedCluster,
	controlPlaneNamespace string) error

AppendPlatformSpecificPolicyRules(rules []rbacv1.PolicyRule) []rbacv1.PolicyRule
Maybe we want to start with just allowing to return additional rules? In theory the current signature allows more dynamic introspection and addition, but I'm not sure it is needed, and people can just return nil instead of having to return what was passed in.
davidvossel left a comment

While we're waiting for some of the capk work to land, perhaps we can sort through how to automatically detect the rhcos containerdisk, as well as the direction we want to go for making tunings like cpu/mem configurable on the VMIs.
if platformType == hyperv1.NonePlatform || platformType == hyperv1.KubevirtPlatform {
	cfg.Spec.Storage = imageregistryv1.ImageRegistryConfigStorage{EmptyDir: &imageregistryv1.ImageRegistryConfigStorageEmptyDir{}}
} else {
	cfg.Spec.Storage.EmptyDir = nil
what happens if we don't set this type to EmptyDir and leave EmptyDir == nil for kubevirt platform?
Domain: kubevirtv1.DomainSpec{
	CPU:    &kubevirtv1.CPU{Cores: 2},
	Memory: &kubevirtv1.Memory{Guest: &guestQuantity},
	Devices: kubevirtv1.Devices{
What's the plan here for configuration? I'd imagine we need to expand the NodePoolPlatform struct in api/v1alpha1/nodepool_types.go with a KubeVirt platform type of some sort.
Is that something we should tackle in this PR? Maybe it could be as simple as exposing a resource requests/limits field so cpu+mem can be configured:
// NodePoolPlatform specifies the underlying infrastructure provider for the
// NodePool and is used to configure platform specific behavior.
type NodePoolPlatform struct {
	// Type specifies the platform name.
	//
	// +unionDiscriminator
	// +immutable
	Type PlatformType `json:"type"`

	// AWS specifies the configuration used when operating on AWS.
	//
	// +optional
	// +immutable
	AWS *AWSNodePoolPlatform `json:"aws,omitempty"`

	// KubeVirt specifies the configuration used when operating on KubeVirt.
	//
	// +optional
	KubeVirt *KubeVirtNodePoolPlatform `json:"kubeVirt,omitempty"`
}

type KubeVirtNodePoolPlatform struct {
	// Requests is a description of the initial vmi resources.
	// Valid resource keys are "memory" and "cpu".
	// +optional
	Requests v1.ResourceList `json:"requests,omitempty"`
}
After further discussion during our sync meeting today, we decided that we should embed the VMITemplate into the node pool for configuring the kubevirt platform. The rhcos volume/disk will be merged into the vmitemplate provided by the node pool.

Changed it - added VMITemplate to the KubevirtPlatformNodePool
// TODO nargaman - replace with pre-defined image
Image: "quay.io/containerdisks/rhcos:4.9",
With the aws provider, hypershift is attempting to automatically detect the correct AMI based on the release. If we are mirroring/publishing rhcos container disks to a predictable place that we control, maybe we follow the same pattern and search for images in the public quay.io repo.

Currently I removed the container disk from the code, and the user must provide this in the CLI.
When we have a more official image, I will add it as the default value.
22e8c33 to 872ff11
872ff11 to 8f35dba
rmohr left a comment

@nirarg looks really great. A few minor suggestions.
Name: "credentials",
VolumeSource: corev1.VolumeSource{
	Secret: &corev1.SecretVolumeSource{
		SecretName: awsCredsSecretName,
Are there other AWS cases, like bare metal installations which make use of this secret too? If not, that change probably makes sense.
@enxebre can you please advise on this?
From what I can tell, PrivatePlatformType can only be none or aws right now. I think this should be fine the way you have it.
From cmd/install/install.go:
func (o *Options) Validate() error {
	switch hyperv1.PlatformType(o.PrivatePlatform) {
	case hyperv1.AWSPlatform:
		if o.AWSPrivateCreds == "" || o.AWSPrivateRegion == "" {
			return fmt.Errorf("--aws-private-region and --aws-private-creds are required with --private-platform=%s", hyperv1.AWSPlatform)
		}
	case hyperv1.NonePlatform:
	default:
		return fmt.Errorf("--private-platform must be either %s or %s", hyperv1.AWSPlatform, hyperv1.NonePlatform)
	}
	return nil
}
I have removed this change, as it is not related to this PR.
api/fixtures/example.go
Outdated
// Otherwise it must fail
// https://issues.redhat.com/plugins/servlet/mobile#issue/RFE-2501
if o.Kubevirt.Image == "" {
	return nil, errors.New("The image for the Kubevirt machine must be provided by user (\"--kubevirt-image\" flag")
I think we expect that the commandline part already ensured that this value is populated. In fact I think that you are doing this already. You can probably remove this check and simplify the function return signature again.

It can't be validated in the command line, because there is an option to create the cluster without a nodepool and create the nodepool later; in that case this parameter can be empty.

But can't you then verify outside that, if creation is requested, this is also required to be filled?
For example at https://github.com/openshift/hypershift/pull/779/files#diff-a8a4a70fd459a31d15b6500359d7e0e3cd47c5f7a48f551071756ddc395eaa20R55 ?

I just want to ensure that we keep the input validation together and don't spread it too much. The entrypoint above is supposed to see all persistent values filled already and should be able to handle your case based on the available info.
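As a rough sketch of the kind of entrypoint check being discussed (the helper name, option fields, and import path are assumptions, not the actual code), the CLI could require the containerdisk only when the invocation will also create node pools, keeping the validation in one place:

package kubevirt

import (
	"errors"

	"github.com/openshift/hypershift/cmd/cluster/core"
)

// validateKubevirtCreateOptions is a hypothetical helper showing where the
// check could live: in the CLI entrypoint, next to the other flag handling.
func validateKubevirtCreateOptions(opts *core.CreateOptions) error {
	// A cluster can be created without a NodePool and the image supplied later,
	// so only require the containerdisk when node pools are created here too.
	if opts.NodePoolReplicas > 0 && opts.KubevirtPlatform.Containerdisk == "" {
		return errors.New("a containerdisk image is required when node pools are created together with the cluster")
	}
	return nil
}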
cmd/cluster/kubevirt/create.go
Outdated
	Image: "",
}

cmd.Flags().StringVar(&opts.KubevirtPlatform.Memory, "kubevirt-memory", opts.KubevirtPlatform.Memory, "In KubeVirt platform - the amount of memory which is visible inside the Guest OS (type BinarySI, e.g. 5Gi, 100Mi)")
How about node-memory or just memory? We are inside the kubevirt subcommand, so we do not in general need a kubevirt- prefix.

Changed to memory
cmd/cluster/kubevirt/create.go
Outdated
}

cmd.Flags().StringVar(&opts.KubevirtPlatform.Memory, "kubevirt-memory", opts.KubevirtPlatform.Memory, "In KubeVirt platform - the amount of memory which is visible inside the Guest OS (type BinarySI, e.g. 5Gi, 100Mi)")
cmd.Flags().Uint32Var(&opts.KubevirtPlatform.CPU, "kubevirt-cpu", opts.KubevirtPlatform.CPU, "In KubeVirt platform - Cores specifies the number of cores inside the vmi, Must be a value greater or equal 1")
How about node-cores or cores?

Changed it to Cores
cmd/cluster/kubevirt/create.go
Outdated
cmd.Flags().StringVar(&opts.KubevirtPlatform.Memory, "kubevirt-memory", opts.KubevirtPlatform.Memory, "In KubeVirt platform - the amount of memory which is visible inside the Guest OS (type BinarySI, e.g. 5Gi, 100Mi)")
cmd.Flags().Uint32Var(&opts.KubevirtPlatform.CPU, "kubevirt-cpu", opts.KubevirtPlatform.CPU, "In KubeVirt platform - Cores specifies the number of cores inside the vmi, Must be a value greater or equal 1")
cmd.Flags().StringVar(&opts.KubevirtPlatform.Image, "kubevirt-image", opts.KubevirtPlatform.Image, "In KubeVirt platform - Image is the name of the image with the embedded disk to be used to create the machines")
How about containerdisk, rootdisk, or node-containerdisk, with a description like:
OpenShift compatible kubevirt containerdisk with ignition support (e.g. RHCOS, FCOS)

Changed it to containerdisk
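For reference, a sketch of what the renamed flag registration might look like (assuming the option field was also renamed to Containerdisk), reusing the description suggested above:

// Hypothetical field name; the flag name and help text follow the suggestion above.
cmd.Flags().StringVar(&opts.KubevirtPlatform.Containerdisk, "containerdisk", opts.KubevirtPlatform.Containerdisk,
	"OpenShift compatible kubevirt containerdisk with ignition support (e.g. RHCOS, FCOS)")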
api/v1alpha1/nodepool_types.go
Outdated
// on KubeVirt platform.
type KubevirtNodePoolPlatform struct {
	// VirtualMachineInstance Spec contains the VirtualMachineInstance specification.
	VirtualMachineInstanceSpec kubevirtv1.VirtualMachineInstanceSpec `json:"vmispec,omitempty"`
Hm, I don't think that we can use vmispec. How about template?

Changed
cmd.Flags().StringVar(&opts.RootVolumeType, "root-volume-type", opts.RootVolumeType, "The type of the root volume (e.g. gp2, io1) for machines in the NodePool")
cmd.Flags().Int64Var(&opts.RootVolumeIOPS, "root-volume-iops", opts.RootVolumeIOPS, "The iops of the root volume when specifying type:io1 for machines in the NodePool")
cmd.Flags().Int64Var(&opts.RootVolumeSize, "root-volume-size", opts.RootVolumeSize, "The size of the root volume (min: 8) for machines in the NodePool")
cmd.Flags().StringVar(&opts.KubevirtMemory, "kubevirt-memory", opts.KubevirtMemory, "In KubeVirt platform - the amount of memory which is visible inside the Guest OS (type BinarySI, e.g. 5Gi, 100Mi)")
Here the kubevirt-* prefix is probably correct, but I think this is a part of the commandline tool which I did not address in the None platform refactoring. We probably want sub-commands for this too. I think this refactoring can be done independently of this PR. @enxebre, what do you think?
go.mod
Outdated
	kubevirt.io/containerized-data-importer-api => github.com/kubevirt/containerized-data-importer-api v1.41.1-0.20211201033752-05520fb9f18d
	sigs.k8s.io/apiserver-network-proxy/konnectivity-client => sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.24
	sigs.k8s.io/cluster-api => sigs.k8s.io/cluster-api v1.0.0
	sigs.k8s.io/cluster-api-provider-kubevirt => github.com/openshift/cluster-api-provider-kubevirt v0.0.0-20211222061349-10e498699601
The replaces look just how I hoped they would. 👍
	return ctrl.Result{}, fmt.Errorf("failed to reconcile KubevirtMachineTemplate: %w", err)
}
span.AddEvent("reconciled kubevirtmachinetemplate", trace.WithAttributes(attribute.String("name", machineTemplate.GetName())))
case hyperv1.NonePlatform:
I think this is a rebase leftover. The None platform is handled in isAutomatedMachineManagement.

You're right, removed it
const (
	hostedClusterAnnotation = "hypershift.openshift.io/cluster"
	imageCAPK = "registry.ci.openshift.org/hypershift/cluster-api-kubevirt-controller:1.0.0"
sure. on it.

I wonder if we can somehow delete the 1.0.0 tag from the openshift registry. It can potentially be a source of confusion for a long time :)
a9ae305 to b9c63b6
api/v1alpha1/nodepool_types.go
Outdated
// on KubeVirt platform.
type KubevirtNodePoolPlatform struct {
	// Template Spec contains the VirtualMachineInstance specification.
	Template *kubevirtv1.VirtualMachineInstanceSpec `json:"template,omitempty"`
Do you want to go ahead and adopt the full VMTemplate as I've outlined here? kubernetes-sigs/cluster-api-provider-kubevirt#61
If we're pretty confident this is the API we're going to move forward with, we could adopt it now on the hypershift side and then not have to deal with mutating the hypershift API in a backwards-incompatible way later on.
The change on the hypershift API would look like this:
// VirtualMachineTemplateSpec defines the desired state of the kubevirt VM.
type VirtualMachineTemplateSpec struct {
	// +kubebuilder:pruning:PreserveUnknownFields
	// +nullable
	ObjectMeta metav1.ObjectMeta `json:"metadata,omitempty"`

	// VirtualMachineSpec contains the VirtualMachine specification.
	Spec kubevirtv1.VirtualMachineSpec `json:"spec,omitempty" valid:"required"`
}

// KubevirtNodePoolPlatform specifies the configuration of a NodePool when operating
// on KubeVirt platform.
type KubevirtNodePoolPlatform struct {
	// Template Spec contains the VirtualMachine specification.
	VirtualMachineTemplate *VirtualMachineTemplateSpec `json:"virtualMachineTemplate,omitempty"`
}
Until the capk PR merges, we'd simply only be using the virtualMachineTemplate.Spec.Template section when translating it to the capk KubeVirtMachine... but once capk adopts the full VMTemplate that would change.
I'm fine either way, as long as making backwards-incompatible changes to the hypershift NodePool API isn't going to cause us issues.

Thanks @davidvossel
I changed it according to the latest capk change.
b9c63b6 to 1c74853
rmohr left a comment

One nit.
api/v1alpha1/nodepool_types.go
Outdated
// on KubeVirt platform.
type KubevirtNodePoolPlatform struct {
	// NodeTemplate Spec contains the VirtualMachineInstance specification.
	NodeTemplate *capikubevirt.VirtualMachineTemplateSpec `json:"nodetemplate,omitempty"`
I think to be consistent the json field should be named nodeTemplate (capital T).

Fixed
1c74853 to 0da4c20
/lgtm
	}
}

func generateNodeTemplae(memory string, cpu uint32, image string) *capikubevirt.VirtualMachineTemplateSpec {
nit: typo generateNodeTemplae

Fixed
cmd/nodepool/create.go
Outdated
RootVolumeType: "gp2",
RootVolumeSize: 120,
RootVolumeIOPS: 0,
// TODO nargaman need to move this into platform specific create file
nit: usually we use this convention, ordered by preference:
// TODO (jira-1234): desc
// TODO (username): desc
// TODO: desc

Changed my TODO comments to this format (// TODO (username): desc)
}

cmd.Flags().StringVar(&opts.KubevirtPlatform.Memory, "memory", opts.KubevirtPlatform.Memory, "The amount of memory which is visible inside the Guest OS (type BinarySI, e.g. 5Gi, 100Mi)")
cmd.Flags().Uint32Var(&opts.KubevirtPlatform.Cores, "cores", opts.KubevirtPlatform.Cores, "The number of cores inside the vmi, Must be a value greater or equal 1")
"The number of cores inside the vmi, Must be a value greater or equal 1"
Should we validate this below and fail otherwise? The way it is now, zero would pass through, right?

Added validation in the applyPlatformSpecificsValues function
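A minimal sketch of the kind of check that was added (in the PR the check lives inside applyPlatformSpecificsValues; the helper name and error wording here are assumptions):

package kubevirt

import "fmt"

// validateCores rejects a zero core count: a uint32 flag silently defaults to
// 0 when the user omits it, while the VMI spec requires at least one core.
func validateCores(cores uint32) error {
	if cores < 1 {
		return fmt.Errorf("--cores must be a value greater than or equal to 1, got %d", cores)
	}
	return nil
}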
	testNamespace,
	apiendpoint)
if fnCallsCount != 1 {
	t.Fatalf("Expected the provided fuction to be called once")
nit: typo fuction

fixed
func reconcileCAPIProviderRole(role *rbacv1.Role) error {
	role.Rules = []rbacv1.PolicyRule{
func reconcileCAPIProviderRole(role *rbacv1.Role, p platform.Platform) error {
This is disrupting existing platform behaviour; can we please have a unit test for it?

Added a unit test; I used gomock to generate a mock for the platform.Platform interface (gomock was also added to go.mod).

Did you run something manually to generate platform_generated.go that we now need to automate via the Makefile?
If so, maybe just keep it simple for now: drop gomock and use the kubevirt platform for the unit test.

This mock file is auto-generated with go generate ./...
It can be added to the Makefile.

OK, so we'd need to have a target for it and include a dep for the unit tests and verify targets. I'm not sure this is adding much value at this point, since the interface and the logic are fairly simple. I'd rather just rely on the kubevirt platform for the unit test, which also adds implicit value, and avoid adding a new dep.
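A sketch of the kind of gomock-free test being suggested, exercising reconcileCAPIProviderRole with the concrete kubevirt platform (the package names, import path, and the exact assertion are assumptions):

package hostedcluster

import (
	"testing"

	rbacv1 "k8s.io/api/rbac/v1"

	"github.com/openshift/hypershift/hypershift-operator/controllers/hostedcluster/internal/platform/kubevirt"
)

func TestReconcileCAPIProviderRoleKubevirt(t *testing.T) {
	role := &rbacv1.Role{}

	// Use the concrete kubevirt platform instead of a generated mock.
	if err := reconcileCAPIProviderRole(role, kubevirt.Kubevirt{}); err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	if len(role.Rules) == 0 {
		t.Fatal("expected the reconciled role to contain policy rules")
	}
}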
func reconcileCAPIProviderRole(role *rbacv1.Role) error {
	role.Rules = []rbacv1.PolicyRule{
func reconcileCAPIProviderRole(role *rbacv1.Role, p platform.Platform) error {
Consider making p a pointer to avoid copying by value?

Changed the assigned structures to be pointers

I don't see the change; I'm still seeing you pass p platform.Platform here. That's ok, this is not a blocker to me anyway.
// components for a given platform.
func (r *HostedClusterReconciler) reconcileCAPIProvider(ctx context.Context, createOrUpdate upsert.CreateOrUpdateFN, hcluster *hyperv1.HostedCluster,
	capiProviderDeploymentSpec *appsv1.DeploymentSpec) error {
	capiProviderDeploymentSpec *appsv1.DeploymentSpec, p platform.Platform) error {
Consider making p a pointer to avoid copying by value?

Changed the assigned structures to be pointers

I don't see the change; I'm still seeing you pass p platform.Platform here. That's ok, this is not a blocker to me anyway.
Thanks @nirarg, this looks great to me other than #779 (comment).
59a6809 to c17ef0d
// on KubeVirt platform.
type KubevirtNodePoolPlatform struct {
	// NodeTemplate Spec contains the VirtualMachineInstance specification.
	NodeTemplate *capikubevirt.VirtualMachineTemplateSpec `json:"nodeTemplate,omitempty"`
I don't think we should omitempty here. Not a blocker.
switch hcluster.Spec.Platform.Type {
case hyperv1.AWSPlatform:
	platform = aws.AWS{}
	platform = &aws.AWS{}
Why this change?

This is done for this comment: #779 (comment)
The parameter passed to this function is an interface, so I changed the struct assignment to a pointer.

Oh, sorry, I was referring to the reconcileCAPIProvider signature, where we could pass p as a pointer instead; just a minor thought. I can't see how this change makes any difference.

Yes, but the p passed to the function is an interface; you shouldn't change the interface to a pointer, but you can assign a pointer to it. I think that does the job and prevents the copy, am I wrong?
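A small, self-contained illustration of the Go semantics in question (simplified names, not the project's code): storing a pointer in an interface boxes only the pointer, while storing a value copies the struct once at assignment; in both cases, passing the interface around afterwards copies only its two-word header, never the underlying struct again.

package main

import "fmt"

// Platform is a simplified stand-in for the platform.Platform interface.
type Platform interface{ Name() string }

// AWS carries a deliberately large field so the cost of copying it is visible.
type AWS struct{ cache [4096]byte }

func (a AWS) Name() string { return "aws" } // value receiver, as in the original code

func reconcileCAPIProvider(p Platform) {
	// Copying p here copies only the interface header (type + data pointer),
	// not the AWS struct it refers to.
	fmt.Println(p.Name())
}

func main() {
	var p Platform = &AWS{} // boxes a pointer: the struct is never copied into the interface
	reconcileCAPIProvider(p)

	var q Platform = AWS{} // boxes a value: the struct is copied once, at this assignment
	reconcileCAPIProvider(q)
}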
…roller 1) Create capk-controller-manager 2) Add KubevirtCluster creation in hostedCluster reconcilation
* Add new KubevirtNodePoolPlatform in case platform type is KubeVirt * Create VirtualMachineTemplateSpec inside KubevirtNodePoolPlatform
c17ef0d to dea6ec3
Thanks a lot @nirarg! Let's follow up on providing the nodepool CLI platform awareness and getting it into CI/docs.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: enxebre, nirarg
@nirarg: all tests passed!
…en comment Minor fixes for code added in PR openshift#779 * Add kubevirt platform object to interface validations lines * Remove GoMock generate comment left in the code and not used
This PR adds KubeVirt as a new platform for hypershift clusters.
This allows adding KubeVirt VMs as workers to a hypershift cluster.
This PR also introduces the new cluster-api-provider-kubevirt: https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt
The following fork is used for API and docker image creation: https://github.com/openshift/cluster-api-provider-kubevirt