Insmod rustyvisor.ko blocks forever #54
Comments
Hi,

It looks like the strange output I was seeing in dmesg is due to the behaviour of the dmesg logger: the hypervisor was trying to log "rustyvisor_load", but the logger used a single printk() for each character (and printk() automatically adds a newline), and prepended some additional characters. I attach a patch that partially fixes this behaviour: 0001-Remove-deadlock-in-module-s-init-cleanup.txt

After fixing the logger, I realized that the hypervisor did not start... After investigating a little bit, I discovered this happens because it refuses to start inside a VM. Since QEMU/KVM supports nested virtualization, I removed this check.

Now I get an error due to stack corruption; I'll try to investigate it next week.

Luca
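For illustration, a fix along these lines buffers characters and emits one printk() per line. This is only a sketch of the idea, not the attached patch; the log_char_buffered() entry point is a hypothetical name:

/* Hypothetical line-buffered logger: accumulate characters and flush
 * a whole line with a single printk(), instead of calling printk()
 * once per character (which appends a newline and a log-level prefix
 * every time). Illustrative only, not rustyvisor's API. */
#include <linux/kernel.h>

#define LOG_BUF_SIZE 256

static char log_buf[LOG_BUF_SIZE];
static size_t log_len;

void log_char_buffered(char c)
{
    if (c != '\n' && log_len < LOG_BUF_SIZE - 1) {
        log_buf[log_len++] = c;
        return;
    }
    log_buf[log_len] = '\0';
    printk(KERN_INFO "%s\n", log_buf);  /* one printk per line */
    log_len = 0;
}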
Hi Luca, thanks for your patches! I apologize for the exceptionally poor state of the Linux version of rustyvisor at the moment.

You're right that the printk dmesg logger is completely busted, but it's actually even worse than it seems at first: if the guest kernel causes a VM exit while holding the dmesg lock that printk uses, and we then call printk in the kernel image from the hypervisor host, we'll probably deadlock. Honestly, the dmesg_logger is a giant debugging hack and should go away; apologies for not calling out this particular hack in the code. You might look at using the serial logger directly.

I don't know what I was thinking with the module init code; that's also busted. I will take a look at your patch later when I have time.

The third issue, where rustyvisor refuses to work inside a hypervisor, is just silly. Thank you for your patch. You are of course also welcome to submit your changes via a GitHub PR.

I'm busy starting a new job right now and unfortunately can't commit to when I will make the changes to rustyvisor. Thanks again.
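For anyone following along: a serial logger writes bytes straight to the UART with port I/O and touches no kernel state, so it cannot deadlock on the dmesg lock. Below is a rough sketch, assuming a standard 16550 UART at COM1 (port 0x3F8); it is illustrative, not rustyvisor's actual serial logger:

/* Illustrative serial output: poll the Line Status Register until the
 * transmit-holding register is empty (bit 5), then write one byte.
 * No kernel locks or printk machinery involved, so it is safe to call
 * from the hypervisor host, even in the middle of a VM exit. */
#define COM1 0x3F8

static inline unsigned char uart_inb(unsigned short port)
{
    unsigned char v;
    __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static inline void uart_outb(unsigned short port, unsigned char v)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(v), "Nd"(port));
}

static void serial_putc(char c)
{
    while ((uart_inb(COM1 + 5) & 0x20) == 0)
        ;  /* busy-wait for THR empty */
    uart_outb(COM1, (unsigned char)c);
}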
I just compiled the Linux kernel module and tried to test it (in a KVM VM).
But when I do
sudo /sbin/insmod rustyvisor.ko
the insmod command blocks forever...
I think the issue is in the
down(&init_lock);
calls in rustyvisor_init(): who performs the matching "up()"? It seems to me that no "up()" is ever performed on the semaphore, so insmod blocks and never wakes up.
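As a minimal illustration (not rustyvisor's actual code): a semaphore initialized to 0 makes down() block until some other context calls up(), so an init function like the one below hangs exactly the way insmod does. The fix is either to have the per-core work call up() when it finishes, or to drop the semaphore entirely, as the patch below does.

/* Hypothetical module showing the failure mode: nothing ever calls
 * up(&init_lock), so down() blocks and insmod never returns. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/semaphore.h>

static struct semaphore init_lock;

static int __init example_init(void)
{
    sema_init(&init_lock, 0);

    /* ... per-core work would have to call up(&init_lock) here ... */

    down(&init_lock);  /* blocks forever without a matching up() */
    return 0;
}
module_init(example_init);
MODULE_LICENSE("GPL");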
I tried the following patch:
diff --git a/linux/src/linux_module.c b/linux/src/linux_module.c
index 006b5d9..959281e 100644
--- a/linux/src/linux_module.c
+++ b/linux/src/linux_module.c
@@ -7,7 +7,6 @@
 #define MODULE_NAME "Rustyvisor"
-struct semaphore init_lock;
 atomic_t failure_count;
@@ -33,18 +32,15 @@ uintptr_t rustyvisor_linux_virt_to_phys(void *virt) {
 static void rustyvisor_linux_unload_all_cores(void) {
     int cpu;
     struct task_struct *task;
 }
@@ -60,20 +56,16 @@ static int __init rustyvisor_init(void) {
     rustyvisor_load();
and insmod completes without issues. The module prints some strange messages in dmesg:
[ 13.921514] 6
[ 13.921520] cr
[ 13.921875] cu
[ 13.922133] cs
[ 13.922467] ct
[ 13.922704] cy
[ 13.922980] cv
[ 13.923218] ci
[ 13.923472] cs
[ 13.923773] co
[ 13.924049] cr
[ 13.924319] c_
[ 13.924583] cl
[ 13.924885] co
[ 13.925144] ca
[ 13.925438] cd
[ 13.925705] 6
[ 13.925957] c
[ 13.926197] c
After that, I do not know how to test the hypervisor... I can remove the module without crashes (again, it prints some strange characters).
I also tried the UEFI variant of the hypervisor, but after loading rustyvisor.efi in the UEFI shell nothing seems to happen (and running rustyvctl.efi always reports version 0 of the hypervisor, regardless of whether I loaded rustyvisor.efi or not). I am pretty sure I am doing something wrong here... How can I test the hypervisor in action?
Luca