--------------------------------------------------------
------------------ OpenXT EFI Guide --------------------
--------------------------------------------------------
Author: Will Ronchetti
Email: wrr33@cornell.edu
Github: willronchetti

This document serves as a guide for anyone looking to test out PR #729 - Adding Nested Virt and Guest UEFI Support.

To start, you will need to build OpenXT with the patches integrated. Merging the PR and running a build as usual should do it. As new commits come in, patch conflicts may arise. Should that happen, they should not be terrible to fix up manually so long as a stable build is available. A build off of master *should* build out of the box; if it does not, that is something that needs to be fixed.

Once you have the build, you can perform an install or an upgrade from a current version of OpenXT. I have not detected any problems with existing VMs when upgrading to the new functionality. That is, you should be able to boot into your old VMs without any trouble. If you find any new issues at all with previously installed VMs, those would be worth reporting.

--------------------------------------------------------
----------------- Windows EFI/Nested Virt --------------
--------------------------------------------------------

There are some known issues with the build as of now that will need rectifying. At the moment, a standard UEFI install from a .iso file can take an extremely long time (anywhere from 5-15 minutes for Linux distros, and over 30 minutes for Windows). There are experimental q35 patches that may help, but these have not been tested. With that said, you can still install UEFI guests through the iso images. The only prerequisite step is to enable OVMF by running 'xec-vm -n <vm name> -x ovmf true'. This boots the guest with OVMF rather than the traditional BIOS.
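As a quick sketch, the OVMF toggle can be flipped per-VM from the dom0 command line. The VM name 'win10efi' below is just a placeholder, and the trailing 'get' call (to read the setting back) is an assumption based on the xec-vm get/set syntax used elsewhere in this guide:

```shell
# Enable OVMF (UEFI firmware) for an existing VM; "win10efi" is a
# placeholder -- substitute your own VM's name.
xec-vm -n win10efi -x ovmf true

# Read the setting back to confirm it took effect (assumed syntax,
# mirroring the 'get phys-path' form shown later in this guide):
xec-vm -n win10efi get ovmf
```

Setting the value back to false should restore the traditional BIOS boot path.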
To test out Windows 10 nested virtualization, download a Windows 10 Enterprise image and drop it in /storage/isos. You will need to boot into another VM with a disk partition manager. Add another 5 GB hard drive to that VM. When inside the VM, format the vhd you just added to be GPT/FAT32, then extract the contents of Windows.iso into the vhd. From there, you can change the virtual hard drive path of your new 'win10efi' VM to point to that vhd. Run the following commands (Windows specific, except for ovmf):

*) xec-vm -n win10efi -x cpuid '[ "0x1:ecx=0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","2147483655:edx=xxxxxxxxxxxxxxxxxxxxxxx1xxxxxxxx" ]'
*) xec-vm -n win10efi -x nestedhvm true
*) xec-vm -n win10efi -x viridian false
*) xec-vm -n win10efi -x extra-xenvm "nomigrate=1"
*) xec-vm -n win10efi -x ovmf true

Then boot the VM - you should see EDK come up, at which point it will boot into the Windows installer. You can then install Windows as usual. Once Windows is installed, you can easily enable Hyper-V by navigating to the 'Turn Windows Features on or off' menu item in search. Select Hyper-V, then reboot. When Windows comes back up, you should be able to use Hyper-V's VMM to spawn VMs.

To test out secure boot, you will need that same vhd you used for the install (this is the easiest route - manual configuration is possible, particularly on Linux, but is likely difficult on Windows). The OVMF build produces an efi executable called 'EnrollDefaultKeys.efi'. Locate this file and copy it over to your vhd, then boot back into the EDK shell. You should be able to run 'EnrollDefaultKeys.efi' from the shell, which will configure and enable secure boot for you. From there you can boot into Windows and enable Credential Guard (at your own risk). We found that Credential Guard exhibits maximum CPU usage and essentially cripples the system. This is supposedly a known issue with Citrix.
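The per-VM settings above can be applied in one pass from dom0; a minimal sketch, assuming the VM is named 'win10efi' as in this guide (the cpuid mask string is copied verbatim from the command list):

```shell
#!/bin/sh
# Sketch: configure a "win10efi" VM for nested virtualization with OVMF.
# Assumes the VM already exists and this is run from dom0.
VM=win10efi

# Expose the CPU feature bits the guest needs for nested virtualization
# (mask string as given in this guide):
xec-vm -n "$VM" -x cpuid '[ "0x1:ecx=0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","2147483655:edx=xxxxxxxxxxxxxxxxxxxxxxx1xxxxxxxx" ]'

# Turn on nested HVM and disable the viridian (Hyper-V) enlightenments:
xec-vm -n "$VM" -x nestedhvm true
xec-vm -n "$VM" -x viridian false

# Nested guests should not migrate, so pass nomigrate to xenvm:
xec-vm -n "$VM" -x extra-xenvm "nomigrate=1"

# Boot with OVMF (UEFI) instead of the traditional BIOS:
xec-vm -n "$VM" -x ovmf true
```

This is just the command list above collected into one script; nothing here differs from running the commands by hand.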
--------------------------------------------------------
---------------- Linux Guest EFI -----------------------
--------------------------------------------------------

Testing out Linux Guest EFI is a good bit easier than Windows. You can more easily use iso images, but the vhd route described above will be much faster. Most Linux distros have EFI support now, so you should be able to enable ovmf and run their installers without a problem. Any problems should be noted. Tested distros so far are Ubuntu (boot from file), Fedora, Debian and CentOS.

There is a third option for testing - you can create minimal bootable disk images using systemd's mkosi on Fedora 26. Fedora 26 is most convenient because it has the newest version of systemd, and it will allow you to build a few different distros, as Fedora is able to utilize other package managers (pacman, dnf, yum, apt). A brief guide on how to do so follows.

*) Install Fedora 26, either from PXE or another source.
*) Install mkosi from source; be sure to pull down the new CentOS support if you want it (not bootable, though).
*) Install systemd-container using 'dnf install systemd-container'.
*) Install qemu-img using 'yum install qemu-img'.
*) Create a new directory to be used to create your image. Inside that directory, create a 'mkosi.cache' directory. This will significantly speed up builds after the first one.
*) To create a new bootable UEFI image, run the following command: 'mkosi -t raw_btrfs -d <distro> --bootable --password=<password> -o <image>'.
*) This will produce an <image>, which is a raw disk image. Convert that image into a vhd by running 'qemu-img convert -f raw -O vpc -o subformat=fixed,force_size <image> <image>.vhd'.
*) You now have a vhd image usable on OpenXT. Scp the image into /storage/disks.
*) Create a new VM and create an additional 1-2 GB disk, depending on the distro.
*) Run 'xec-vm -n <vm name> -x ovmf true' to enable EFI.
*) Locate the disk slot by running 'xec-vm -n <vm name> -k <disk> get phys-path'.
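The build-and-convert portion of the steps above can be sketched as a single script. The distro, image name, and OpenXT host address below are all placeholder values, and the empty --password= leaves the root password blank as a convenience for testing:

```shell
#!/bin/sh
# Sketch: build a bootable Fedora image with mkosi and convert it to a
# vhd for OpenXT. "fedora.img" and "<openxt-host>" are placeholders.
set -e

mkdir -p efi-image && cd efi-image
mkdir -p mkosi.cache   # package cache; speeds up builds after the first

# Build a bootable raw btrfs UEFI image (empty root password for testing):
mkosi -t raw_btrfs -d fedora --bootable --password= -o fedora.img

# Convert the raw image to a fixed-size vhd usable by OpenXT:
qemu-img convert -f raw -O vpc -o subformat=fixed,force_size fedora.img fedora.vhd

# Copy the vhd into the OpenXT host's disk store:
scp fedora.vhd root@<openxt-host>:/storage/disks/
```

From there, continue with the remaining steps: enable ovmf on the VM and point its disk at the new vhd.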
*) Change the new disk's physical path to point to the vhd you just created.
*) Boot into OVMF, and your distro should boot.

More information is readily available on Github, but the guide above should get you started if you're looking to quickly test out EFI.

To configure secure boot on Linux guests, you should be able to run the same OVMF tool described above. You can also configure this manually, but the steps vary from distro to distro. Note that while mkosi can technically do this signing for you, the library it uses (sbsigntool) appears to be broken on Fedora 26 (likely due to a newer version of gcc). This functionality does work on Fedora 24, though compatibility with all distros may not be guaranteed. The mkosi docs describe how to configure secure boot signing. If you do go that route, you'll simply need to enroll the PK in ovmf in order to actually use secure boot.

---------------------------------------------------------
---------------- Final Notes ----------------------------
---------------------------------------------------------

There is an additional doc included that serves as the original source of notes on how to configure much of the above. I have provided this document in addition because I feel the other document was not as clear as it could be. With that said, when in doubt, or if something isn't working, consult the other document and try a few different configurations; report back if you find one that appears to be broken (or one that works). Note that the other document describes additional work that is not included in this patch set.