Kubevirt platform #779
Changes from all commits: dd43a7c, 6c7851f, ce62aa5, 736f588, dea6ec3
Changes to the API types:

```diff
@@ -4,6 +4,7 @@ import (
 	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/util/intstr"
+	capikubevirt "sigs.k8s.io/cluster-api-provider-kubevirt/api/v1alpha1"
 )

 const (

@@ -286,6 +287,19 @@ type NodePoolPlatform struct {
 	// IBMCloud defines IBMCloud specific settings for components
 	IBMCloud *IBMCloudPlatformSpec `json:"ibmcloud,omitempty"`
+
+	// Kubevirt specifies the configuration used when operating on KubeVirt platform.
+	//
+	// +optional
+	// +immutable
+	Kubevirt *KubevirtNodePoolPlatform `json:"kubevirt,omitempty"`
 }

+// KubevirtNodePoolPlatform specifies the configuration of a NodePool when operating
+// on KubeVirt platform.
+type KubevirtNodePoolPlatform struct {
+	// NodeTemplate Spec contains the VirtualMachineInstance specification.
+	NodeTemplate *capikubevirt.VirtualMachineTemplateSpec `json:"nodeTemplate,omitempty"`
+}
+
 // AWSNodePoolPlatform specifies the configuration of a NodePool when operating
```
Review comments on the new `NodeTemplate` field:

Member: I don't have strong opinions here, but curious if you have considered passing

Member: Also, should

Member: Ignore the required comment; fields are already required unless indicated otherwise.

Contributor: I think being able to set annotations and labels is pretty important, but it is also KubeVirt platform specific. Therefore I think it makes sense to also have the ObjectMeta.

Contributor: Yes, we need ObjectMeta on the KubeVirt VM itself that is independent of the NodePool. Even the Namespace is important to us. Here's one reason why: right now the KubeVirt provider is assumed to be starting KubeVirt VMs within the management cluster, but that's not guaranteed to be the case in the future. We're open to the possibility that the NodePool will be creating KubeVirt machines in an external cluster. In that case, the namespace within the ObjectMeta on the VM here actually represents the namespace on the external cluster in which to launch the KubeVirt VM.
Member: I don't think we should omitempty here. Not a blocker.
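As a rough illustration of how the new field and the ObjectMeta discussion fit together, here is a sketch of populating a NodePool's platform section. The `hyperv1` import path, the namespace and label values, and the assumption that `VirtualMachineTemplateSpec` carries an ObjectMeta alongside its Spec are illustrative, not taken from this diff.

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	capikubevirt "sigs.k8s.io/cluster-api-provider-kubevirt/api/v1alpha1"

	hyperv1 "github.com/openshift/hypershift/api/v1alpha1" // assumed import path
)

// kubevirtNodePoolPlatform sketches a NodePool platform section using the new
// Kubevirt field. Labels, annotations, and (for a possible external infra
// cluster) the namespace travel on the VM template's ObjectMeta, independent
// of the NodePool's own metadata.
func kubevirtNodePoolPlatform() hyperv1.NodePoolPlatform {
	return hyperv1.NodePoolPlatform{
		Kubevirt: &hyperv1.KubevirtNodePoolPlatform{
			NodeTemplate: &capikubevirt.VirtualMachineTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Namespace: "guest-vms", // hypothetical namespace on the (possibly external) infra cluster
					Labels: map[string]string{
						"example.com/pool": "workers", // hypothetical label
					},
				},
			},
		},
	}
}
```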
A new 91-line file adds the KubeVirt create command:

```go
package kubevirt

import (
	"context"
	"errors"
	"fmt"
	"os"
	"os/signal"
	"syscall"

	apifixtures "github.com/openshift/hypershift/api/fixtures"
	"github.com/openshift/hypershift/cmd/cluster/core"
	"github.com/spf13/cobra"
	utilrand "k8s.io/apimachinery/pkg/util/rand"
)

func NewCreateCommand(opts *core.CreateOptions) *cobra.Command {
	cmd := &cobra.Command{
		Use:          "kubevirt",
		Short:        "Creates basic functional HostedCluster resources for KubeVirt platform",
		SilenceUsage: true,
	}

	opts.KubevirtPlatform = core.KubevirtPlatformCreateOptions{
		Memory:             "4Gi",
		Cores:              2,
		ContainerDiskImage: "",
	}

	cmd.Flags().StringVar(&opts.KubevirtPlatform.Memory, "memory", opts.KubevirtPlatform.Memory, "The amount of memory which is visible inside the Guest OS (type BinarySI, e.g. 5Gi, 100Mi)")
	cmd.Flags().Uint32Var(&opts.KubevirtPlatform.Cores, "cores", opts.KubevirtPlatform.Cores, "The number of cores inside the vmi, Must be a value greater or equal 1")
	cmd.Flags().StringVar(&opts.KubevirtPlatform.ContainerDiskImage, "containerdisk", opts.KubevirtPlatform.ContainerDiskImage, "A reference to docker image with the embedded disk to be used to create the machines")

	cmd.Run = func(cmd *cobra.Command, args []string) {
		ctx, cancel := context.WithCancel(context.Background())
		sigs := make(chan os.Signal, 1)
		signal.Notify(sigs, syscall.SIGINT)
		go func() {
			<-sigs
			cancel()
		}()

		if err := CreateCluster(ctx, opts); err != nil {
			log.Error(err, "Failed to create cluster")
			os.Exit(1)
		}
	}

	return cmd
}

func CreateCluster(ctx context.Context, opts *core.CreateOptions) error {
	return core.CreateCluster(ctx, opts, applyPlatformSpecificsValues)
}

func applyPlatformSpecificsValues(ctx context.Context, exampleOptions *apifixtures.ExampleOptions, opts *core.CreateOptions) (err error) {
	if opts.KubevirtPlatform.APIServerAddress == "" {
		if opts.KubevirtPlatform.APIServerAddress, err = core.GetAPIServerAddressByNode(ctx); err != nil {
			return err
		}
	}

	if opts.NodePoolReplicas > -1 {
		// TODO (nargaman): replace with official container image, after RFE-2501 is completed
		// As long as there is no official container image
		// The image must be provided by user
		// Otherwise it must fail
		if opts.KubevirtPlatform.ContainerDiskImage == "" {
			return errors.New("the container disk image for the Kubevirt machine must be provided by user (\"--containerdisk\" flag)")
		}
	}

	if opts.KubevirtPlatform.Cores < 1 {
		return errors.New("the number of cores inside the machine must be a value greater or equal 1")
	}

	infraID := opts.InfraID
	if len(infraID) == 0 {
		infraID = fmt.Sprintf("%s-%s", opts.Name, utilrand.String(5))
	}
	exampleOptions.InfraID = infraID
	exampleOptions.BaseDomain = "example.com"

	exampleOptions.Kubevirt = &apifixtures.ExampleKubevirtOptions{
		APIServerAddress: opts.KubevirtPlatform.APIServerAddress,
		Memory:           opts.KubevirtPlatform.Memory,
		Cores:            opts.KubevirtPlatform.Cores,
		Image:            opts.KubevirtPlatform.ContainerDiskImage,
	}
	return nil
}
```
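Assuming the parent command is the existing `hypershift create cluster`, the new subcommand would be invoked along the lines of `hypershift create cluster kubevirt --memory 4Gi --cores 2 --containerdisk <image>`; the flag names and defaults come from the code above, while the full command path is an assumption.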
Review comments on the `--cores` flag:

Member: Should we validate this below and fail otherwise? The way it's now, zero would pass through, right?

Contributor (author): Added validation in `applyPlatformSpecificsValues`.
Further review discussion:

fyi, in the future we might need to reconsider RunStrategy. When we use `runStrategy: Always`, the KubeVirt VM could crash and recover with a new IP. If we need to avoid that, we'll have to consider a way to support `runStrategy: Manual`. This is fine for now; just be aware that there are likely some issues for us to follow up on here.

In general I would even consider changing this in KubeVirt itself, so that we restart the VMI in place. Rescheduling has a lot of overhead; all other Kubernetes controllers avoid that and restart in place. It potentially has an impact on how professional your monitoring has to be, though.
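For context on the run-strategy point, below is a minimal sketch of where such a knob could live if a follow-up adds it. It assumes the KubeVirt core API's `RunStrategy` field and constants (the import path shown is the current upstream one) and assumes the template spec wraps a `kubevirtv1.VirtualMachineSpec`; nothing like this is part of the PR itself.

```go
package example

import (
	kubevirtv1 "kubevirt.io/api/core/v1" // assumed import path for the KubeVirt core API
	capikubevirt "sigs.k8s.io/cluster-api-provider-kubevirt/api/v1alpha1"
)

// withManualRunStrategy pins the VM run strategy to Manual so a crashed VM is
// not automatically restarted (and therefore does not come back with a new IP).
// Whether the NodePool API should expose this is the open follow-up above.
func withManualRunStrategy(tmpl *capikubevirt.VirtualMachineTemplateSpec) {
	strategy := kubevirtv1.RunStrategyManual
	tmpl.Spec.RunStrategy = &strategy
}
```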