
feat(vmm): cloud-hypervisor support #1838

Closed · rinor opened this issue Apr 9, 2023 · 5 comments · Fixed by #1855

rinor (Contributor) commented Apr 9, 2023

I was trying to play around with nanos on https://github.com/cloud-hypervisor/cloud-hypervisor, but it looks like there are some issues. Assuming it's not a problem local to my environment (I'm still troubleshooting), do we know what is preventing nanos from working there?

i.e.:

cloud-hypervisor \
  --cpus "boot=1,max=1,max_phys_bits=48" \
  --memory "size=1024M" \
  --rng "src=/dev/urandom" \
  --kernel nanos/output/platform/pc/bin/kernel.img \
  --disk "path=/home/ops/.ops/images/ch-nanos,direct=on,id=nanos" \
  --cmdline "console=ttyS0 reboot=k panic=1 pci=off" \
  --net "offload_tso=off,offload_ufo=off,offload_csum=off,tap=,mac=,ip=,mask=" \
  --seccomp false \
  --console tty \
  --log-file ./ch-nanos.log \
  -v -v -v

While looking around, I stumbled across cloudius-systems/osv@4fa1483, in case it's relevant here.

francescolavra (Member) commented:

I haven't looked into it, but since cloud-hypervisor is based on rust-vmm, and nanos supports AWS Firecracker, which is also based on rust-vmm, I think it shouldn't take much work to get nanos to boot under cloud-hypervisor (we also support booting in PVH mode). The ACPI issue you mentioned might be relevant.

rinor (Contributor) commented Apr 16, 2023

It probably works (kind of) on Firecracker because, AFAIK, Firecracker does not implement ACPI, which nanos uses for power handling and CPU discovery.

Hence, from what I understand, nanos is unable to perform MP detection without ACPI support (I couldn't find anything related to parsing the MP Floating Pointer Structure / MP Configuration Table) and falls back to a single CPU by default, no matter the VM configuration you provide (a rough sketch of this fallback logic follows the boot log below).

Firecracker + nanos:

We pass 4 vCPUs, but nanos still starts just 1:

  "machine-config": {
    "vcpu_count": 4,
    "cpu_template": "None",
    "smt": false,
    "mem_size_mib": 2000,
    "track_dirty_pages": true
  }

nanos output:

INIT: init_service
INIT: physical memory:
INIT:  [0000000000400000, 000000007ceff000)
INIT: parsing cmdline
INIT: in init_service_new_stack
INIT: init_hwrand
INIT: init cpu features
INIT: calling kernel_runtime_init
ACPI: find_rsdp: could not find valid RSDP
ACPI: AcpiInitializeTables returned 5
APIC: walking MADT table...
APIC: MADT not found, detecting apic interface...
x2APIC: x2APIC detected
APIC: using x2APIC interface
x2APIC: per cpu init, writing 0xc00
x2APIC: read from reg 0x803
x2APIC:  -> read 0x50014
x2APIC: read from reg 0x802
x2APIC:  -> read 0x0
x2APIC: apic id 0, apic ver 50014
x2APIC: write to reg 0x80f, val 0x120
x2APIC: write to reg 0x835, val 0x10000
x2APIC: write to reg 0x836, val 0x10000
x2APIC: write to reg 0x837, val 0x21
x2APIC: write to reg 0x832, val 0x40024
INIT: KVM detected
warning: ACPI MADT not found, default to 1 processor
INIT: init_mxcsr
INIT: starting APs
INIT: started 1 total processors
INIT: hypervisor undetected or HVM platform; registering all PCI drivers...
x2APIC: read from reg 0x802
x2APIC:  -> read 0x0
x2APIC: read from reg 0x802
x2APIC:  -> read 0x0
ACPI: AcpiEnableSubsystem returned 2
x2APIC: write to reg 0x80b, val 0x0
x2APIC: write to reg 0x80b, val 0x0
x2APIC: write to reg 0x80b, val 0x0
x2APIC: write to reg 0x80b, val 0x0
x2APIC: write to reg 0x80b, val 0x0
x2APIC: write to reg 0x80b, val 0x0
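
For context on that last warning, here is a minimal, hypothetical C sketch (not the nanos sources; the struct layouts are simplified and all names are invented) of how MADT-based CPU discovery typically works, and why a missing table forces the single-processor fallback:

#include <stdint.h>

/* Simplified ACPI layouts, illustrative only. The "length" field sits at
 * offset 4 of the standard 36-byte SDT header; the MADT adds 8 more bytes
 * (local APIC address, flags) before its variable-length entries. */
#define MADT_ENTRIES_OFFSET 44

struct acpi_sdt_header {
    char     signature[4];  /* "APIC" for the MADT */
    uint32_t length;        /* total table size in bytes */
    /* remaining SDT header fields omitted for brevity */
} __attribute__((packed));

struct madt_entry {
    uint8_t type;           /* 0 = local APIC, 9 = local x2APIC, ... */
    uint8_t length;         /* entry length, including this header */
} __attribute__((packed));

/* Count the processors advertised by the MADT. When the RSDP/MADT cannot
 * be found at all (as in the Firecracker log above), the only safe answer
 * is 1, which is exactly the "default to 1 processor" warning. A real
 * implementation would also honor each entry's "enabled" flag. */
unsigned int madt_cpu_count(const struct acpi_sdt_header *madt)
{
    if (!madt)                  /* no ACPI tables found by the kernel */
        return 1;

    unsigned int cpus = 0;
    const uint8_t *p = (const uint8_t *)madt + MADT_ENTRIES_OFFSET;
    const uint8_t *end = (const uint8_t *)madt + madt->length;

    while (p + sizeof(struct madt_entry) <= end) {
        const struct madt_entry *e = (const struct madt_entry *)p;
        if (e->type == 0 || e->type == 9)   /* local APIC or x2APIC entry */
            cpus++;
        p += e->length ? e->length : 1;     /* guard against a zero length */
    }
    return cpus ? cpus : 1;
}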

francescolavra self-assigned this on May 12, 2023
francescolavra added a commit that referenced this issue on May 17, 2023:
The pvh_start32 assembly code is a 32-bit entry point for the kernel, used by hypervisors such as QEMU (with the microvm machine type) and cloud-hypervisor (https://github.com/cloud-hypervisor/cloud-hypervisor).
The current code was missing the initialization of the stack, which prevented the kernel from booting under cloud-hypervisor.
This change adds the code to set the stack pointer register and to map the memory area to be used as the initial stack during boot; this allows booting under cloud-hypervisor.

Closes #1838.
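
As a rough illustration of what such a fix involves, here is a hypothetical GNU C sketch (i386 target assumed; the symbol names, stack size, and layout are invented and are not the actual #1855 change) of a PVH 32-bit entry stub that establishes a boot stack before calling into C:

#include <stdint.h>

#define BOOT_STACK_SIZE 8192

/* Hypothetical boot stack; a real kernel also makes sure the early page
 * tables map this region, since an unusable stack is exactly what kept
 * the kernel from booting under cloud-hypervisor. */
static uint8_t boot_stack[BOOT_STACK_SIZE]
    __attribute__((aligned(16), used));

void early_init_c(void *start_info);   /* hypothetical C-level entry */

/* The PVH boot ABI hands over control in 32-bit protected mode with %ebx
 * pointing at hvm_start_info but with no usable stack, so %esp must be
 * loaded before the first call instruction. */
__asm__(
    ".globl pvh_start32_sketch\n"
    "pvh_start32_sketch:\n"
    "    movl $(boot_stack + 8192), %esp\n"  /* top of boot_stack (= BOOT_STACK_SIZE) */
    "    pushl %ebx\n"                       /* pass the hvm_start_info pointer */
    "    call early_init_c\n"
    "1:  hlt\n"                              /* should not return */
    "    jmp 1b\n"
);

void early_init_c(void *start_info)
{
    (void)start_info;   /* parse hvm_start_info, continue kernel init ... */
    for (;;)
        ;
}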
francescolavra (Member) commented:

With #1855, Nanos is able to boot under cloud-hypervisor.
Example command line:
cloud-hypervisor --kernel nanos/output/platform/pc/bin/kernel.img --disk "path=/home/ops/.ops/images/ch-nanos" --net "tap=" --console off --serial tty
In the above command, the hypervisor is configured to use the serial port (instead of the virtio console device) as standard output, so that the messages printed by the kernel and the user program are visible on the terminal.
You will notice a "warning: ACPI MADT not found, default to 1 processor" message in the terminal: that's because of a bug in cloud-hypervisor, more precisely in the acpi_tables crate. The bug was recently fixed in rust-vmm/acpi_tables@b9b34ba, but the fix hasn't made it into a cloud-hypervisor release yet.

rinor changed the title from "question(VMM): cloud-hypervisor support" to "feat(vmm): cloud-hypervisor support" on May 17, 2023
rinor (Contributor) commented May 17, 2023

From a quick test with the official releases, it now boots fine. I may also test with a custom CH build that includes the fixed crate ...

Thank you.

rinor (Contributor) commented May 18, 2023

I also tested with the latest CH main branch, which includes the updated crate, and indeed nanos boots correctly with all of the provided CPUs.
