Conversation

@Prashanth684
Contributor

Add support for specifying the firmware and NVRAM file for UEFI architectures such as aarch64. This closely follows what is done in the terraform-provider-libvirt repo [1]. It will allow these options to be specified in the OpenShift installer's Terraform files and enable libvirt machine creation.

[1] dmacvicar/terraform-provider-libvirt@44a504b

@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign michaelgugino after the PR has been reviewed.
You can assign the PR to them by writing /assign @michaelgugino in a comment when ready.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Prashanth684
Contributor Author

@praveenkumar @cfergeau some context here - we are working on enabling ARM support in the near future, so this would be needed. I have tested this with relevant changes to the installer code on an aarch64 system.

spice graphics is not supported for yet another arch
@Prashanth684
Contributor Author

/test e2e-libvirt

@praveenkumar
Contributor

/retest

Contributor

@cfergeau cfergeau left a comment


My understanding is that libvirt will be able to pick up a suitable default for firmware/nvram most of the time, even on aarch64? But I'll admit I'm not familiar with aarch64 virt :) Can you describe a bit more the issue this is solving? What would be using this libvirt aarch64 support? Baremetal deployments? Or just testing envs?

@Prashanth684
Contributor Author

Prashanth684 commented Dec 1, 2020

My understanding is that libvirt will be able to pick up a suitable default for firmware/nvram most of the time, even on aarch64? But I'll admit I'm not familiar with aarch64 virt :) Can you describe a bit more the issue this is solving? What would be using this libvirt aarch64 support? Baremetal deployments? Or just testing envs?

So, the virt-install command has a `--boot firmware=efi` option which tells it the guest is EFI, and virt-install then generates the XML with the appropriate path to the firmware file; in our case we have to populate the XML with the EFI file path ourselves. The NVRAM file, on the other hand, is optional. In /etc/libvirt/qemu.conf there is an nvram section with defaults which can be enabled so that libvirt picks the correct file. I included it for completeness (mimicking the terraform provider) and in case we need it in the future. So, this change will allow us to change the installer's Terraform files to specify the firmware file which will be used to create the VM.
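In libvirt domain XML terms, explicitly populating these paths might look like the following sketch (the AAVMF paths below are the common Fedora/RHEL locations for aarch64 UEFI firmware and may differ on other distributions; the NVRAM destination path is illustrative):

```xml
<os>
  <type arch='aarch64' machine='virt'>hvm</type>
  <!-- pflash loader: explicit path to the UEFI firmware image -->
  <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
  <!-- per-VM writable NVRAM, created from the variables template -->
  <nvram template='/usr/share/AAVMF/AAVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>
```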

As for the problem we are solving - there will be three deployments of ARM:

  • on AWS
  • baremetal
  • libvirt

Right now we are using a UPI solution for libvirt, which is cumbersome to set up and slow for testing deploys quickly. That is why it would be good to have IPI support, mainly for quick testing and a lightweight CI solution. But for now the purpose is just dev envs.

```diff
-// Same case for PowerPC systems as well
-if !strings.HasPrefix(arch, "s390") && !strings.HasPrefix(arch, "ppc64") {
+switch arch {
+case "s390", "s390x", "ppc64", "ppc64le", "aarch64":
```


On which architectures is SPICE supported? I'm wondering if this shouldn't be changed so that new architectures don't need to continually be added here.

Contributor Author

I thought it better to use a blocklist so we don't miss any arches. https://www.spice-space.org/faq.html claims support for ARM, but libvirt still threw an error saying SPICE graphics is not supported for qemu, so I'm not sure about that.

But yes, this is not ideal. Maybe it would be better to just enable it for x86_64. @cfergeau thoughts?

Contributor

You have upstream support and Red Hat support, which can be different things. On RHEL, I see that it's built for x86_64 and aarch64. I think there has been work to make SPICE support optional, so it may depend on which packages are installed.
Since we also add a console device just before the SPICE graphics element, it's probably fine to only add it on x86_64 for now. If the console is not enough for debugging purposes, we can add a VNC graphics element in the future on platforms where SPICE is disabled.

Contributor Author

Yup, so it looks like qemu-kvm in RHEL is not compiled with SPICE for non-x86. I just looked at the spec file in the RPM:

```
%ifarch x86_64
    %global kvm_target    x86_64
%else
    %global have_spice   0
    %global have_opengl  0
    %global have_gluster 0
```

I can put in a request to have it enabled for aarch64, but in the meantime I think I will change the conditional here to support it only for x86.
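The allowlist approach discussed here could be sketched roughly as follows (a minimal sketch; `supportsSpice` is a hypothetical helper name, and the actual provider code may structure this differently):

```go
package main

import "fmt"

// supportsSpice reports whether a SPICE graphics element should be
// added for the given architecture. Per the discussion above, RHEL's
// qemu-kvm builds SPICE only on x86_64, so an allowlist avoids having
// to extend a blocklist every time a new architecture is added.
func supportsSpice(arch string) bool {
	return arch == "x86_64"
}

func main() {
	for _, arch := range []string{"x86_64", "aarch64", "s390x", "ppc64le"} {
		fmt.Printf("%s: spice=%v\n", arch, supportsSpice(arch))
	}
}
```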

Contributor Author

I will open a separate PR for this.

@cfergeau
Contributor

cfergeau commented Dec 2, 2020

So, the virt-install command has an option --boot firmware=efi which will let it know that it is EFI and virt-install will generate the xml with the appropriate path to the firmware file, but in our case we have to populate the xml with the EFI file path.

I'm wondering if <os firmware='efi'> https://libvirt.org/formatdomain.html#bios-bootloader could be used to trigger the same autoselection mechanism as virt-install? I'm not opposed to adding paths to the firmware/nvram if that's required, but ideally the simpler solution would be enough?
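For reference, the autoselection form from the libvirt domain format docs is just an attribute on the os element, with no explicit loader or nvram paths (machine type shown here as an example):

```xml
<os firmware='efi'>
  <type arch='aarch64' machine='virt'>hvm</type>
</os>
```

With this, libvirt selects a matching firmware image and NVRAM template from its firmware descriptor files instead of requiring hardcoded paths.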

@cfergeau
Contributor

cfergeau commented Dec 2, 2020

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Dec 2, 2020
Prashanth684 added a commit to Prashanth684/cluster-api-provider-libvirt that referenced this pull request Dec 3, 2020
For aarch64, we need to specify the firmware as efi so that the loader and the NVRAM are correctly populated with the appropriate files for machine creation.

This actually started in openshift#213, similar to what was done in the terraform provider, but the newer libvirt-go-xml libraries support setting this attribute, making things easier.
@Prashanth684
Contributor Author

So, the virt-install command has an option --boot firmware=efi which will let it know that it is EFI and virt-install will generate the xml with the appropriate path to the firmware file, but in our case we have to populate the xml with the EFI file path.

I'm wondering if <os firmware='efi'> https://libvirt.org/formatdomain.html#bios-bootloader could be used to trigger the same autoselection mechanism as virt-install? I'm not opposed to adding paths to the firmware/nvram if that's required, but ideally the simpler solution would be enough?

Yes! Thanks for this suggestion. It turns out the firmware attribute is supported in newer libvirt libraries. I tested it out and it works well. I have opened #214. If you think those changes are good I will close this PR.

@Prashanth684
Contributor Author

Closing this as I think #214 and #215 address all the issues.
