
How-to

Host Setup

  • Install OS on VM server system

This setup was developed on Fedora 16 and later, but any distro that supports libvirt, virt-manager, and related utilities can be used. Make sure you reserve a partition for a separate LVM volume group for VMs, or put the VG on a separate disk.

  • Create dedicated 100G+ "analysis" LVM volume group for virtual machines
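If you reserved a raw partition for this, creating the volume group is two commands; a minimal sketch, assuming the spare partition is /dev/sdb3 (substitute your own device):
$ pvcreate /dev/sdb3
$ vgcreate vg_analysis /dev/sdb3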

  • Install virtualization support

yum groupinstall -y Virtualization
for foo in libvirtd ksm ksmtuned; do systemctl enable $foo.service; systemctl start $foo.service; done
  • Install vnc-reflector, xinetd, and rfbproxy for screencasting (rfbproxy isn't packaged, so git is pulled in to build it from source)
yum install -y vnc-reflector xinetd perl-Sys-Virt perl-Sys-Guestfs git
  • Install perl modules for talking to libvirt/libguestfs
yum install -y perl-Sys-Virt perl-Sys-Guestfs perl-XML-LibXML perl-hivex perl-Archive-Zip
  • Disable selinux
$ vi /etc/sysconfig/selinux
SELINUX=disabled
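Editing the config only takes effect on the next boot; to also stop enforcement immediately:
$ setenforce 0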

Enable unprivileged access to libvirt for virtual machine management

  • Reconfigure libvirt config and add/modify the lines below
$ vi /etc/libvirt/libvirtd.conf
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
unix_sock_dir = "/var/run/libvirt"
auth_unix_ro = "none"
auth_unix_rw = "none"
  • Create a group for restricting access to libvirt - jdoe is an example authorized user
$ groupadd -r libvirt
$ usermod -aG libvirt jdoe
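With the config edited and the libvirt group in place, restart libvirtd so the socket is recreated with the new group and permissions:
$ systemctl restart libvirtd.service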
  • Grant permission to qemu files (disk images, etc)
$ usermod -aG qemu jdoe
  • Set global defaults so libvirt clients connect to the right place
$ vi /etc/environment
LIBVIRT_DEFAULT_URI=qemu:///system
LIBGUESTFS_PATH=/var/lib/libguestfs/appliance
# the following line /may/ trigger a bug - https://bugzilla.redhat.com/show_bug.cgi?id=909619
#LIBGUESTFS_ATTACH_METHOD=libvirt
# You may need the following line instead
#LIBGUESTFS_ATTACH_METHOD=appliance
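/etc/environment is only read at login, so log out and back in, then confirm clients default to the system instance:
$ virsh uri
qemu:///system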
  • Prepare cached libguestfs appliance directory - this speeds up libguestfs's initial boot (and saves some disk space)
$ mkdir /var/lib/libguestfs
$ chgrp libvirt /var/lib/libguestfs
$ chmod 775 /var/lib/libguestfs
$ pushd /var/lib/libguestfs
$ wget http://libguestfs.org/download/binaries/appliance/appliance-1.20.0.tar.xz
$ tar -xf appliance-1.20.0.tar.xz
$ chown root:libvirt appliance -Rf
$ find appliance -type f -exec chmod a+r {} +
$ find appliance -type d -exec chmod a+rx {} +
$ popd
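To sanity-check that libguestfs can boot the cached appliance, it ships a diagnostic tool that boots it and reports problems (assuming the tool is installed):
$ libguestfs-test-tool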

OPTIONAL: Tweak libvirt for network migration of virtual machines (only required for a cluster)

  • FIXME: document shared storage configuration (gfs2, nfs, etc)
$ vi /etc/libvirt/libvirtd.conf
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "sasl"
  • Reconfigure sysconfig init-script
$ vi /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
  • Add a user account in sasl password database
$ saslpasswd2 -a libvirt jdoe
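Once both hosts are set up this way, live migration is a one-liner; a sketch, assuming a domain class_a and a second host named otherhost:
$ virsh migrate --live class_a qemu+tcp://otherhost/system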

Configure memory deduplication (KSM)

$ vi /etc/ksmtuned.conf
KSM_THRES_CONST=<the amount of RAM your malware analysis VMs use>
KSM_THRES_COEF=40 # 40% 
KSM_MONITOR_INTERVAL=10 # 10 seconds
  • Restart ksmtuned to apply these changes
$ systemctl restart ksmtuned.service
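Once a few VMs are running you can watch the deduplication happen via sysfs:
$ grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing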

Create the malware analysis isolated virtual network

  • Configure "malware" virtual network libvirt for MITM virtual machine

Note: virt-manager doesn't "know" how to create a virtual network where it isn't managing DHCP/DNS/etc. We want our MITM box to handle those services, so the virtual network needs to be defined manually.

  • malware.xml:
<network>
  <name>malware</name>
  <bridge name='malwarebr0' stp='off' delay='0' />
</network>
  • Define network
$ virsh net-define malware.xml; virsh net-autostart malware; virsh net-start malware
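Confirm the network is active and set to autostart:
$ virsh net-list --all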
  • Mount ram drive on VM host
$ vi /etc/fstab
/tmp/ram /tmp/ram tmpfs rw,mode=777,gid=libvirt 0 0
$ mkdir /tmp/ram ; mount /tmp/ram
  • Define ram pool in libvirt
  • ram.xml:
<pool type='dir'>
  <name>ram</name>
  <target>
    <path>/tmp/ram</path>
    <permissions>
      <mode>0770</mode>
    </permissions>
  </target>
</pool>
  • Define & start RAM pool
$ virsh pool-define ram.xml; virsh pool-autostart ram; virsh pool-start ram
  • Add "analysis" LVM group as a storage pool in virt-manager - set to auto-start on boot
  • Right click localhost (QEMU)
  • Select Details
  • Select the Storage tab
  • Click the plus sign to add a new storage pool
  • Name it 'vm'
  • Type: logical: LVM volume group
  • Click forward
  • Target Path: your "analysis" volume group /dev/vg_something
  • Click finish

Note for Fedora 18 and later (or virt-manager 0.9.4 or later): If you create logical volumes through virt-manager, ensure you specify the same size for 'Max Capacity' and 'Allocation'. Otherwise virt-manager will attempt to create LVM snapshots, and that just ends up being a management nightmare.

Or try creating/modifying this XML:

  • Change @@LVM_PHYSICAL_VOLUME@@ to the device your volume group is on, e.g. /dev/sdb3
  • Change @@LVM_VOLUME_PATH@@ to the place your volume group's logical volumes show up, e.g. /dev/vg_analysis
  • Change @@QEMU_GROUP_ID@@ to the numeric group id of qemu (a sed sketch for all three substitutions follows the XML below)

SHOWSTOPPER BUG: volumes created under this pool need to be readable by non-root users in group qemu or libvirt.

Note: 12-dm-permissions.rules contains the following, which should help (adjust DM_VG_NAME to match your volume group), but does not appear to. :(

ACTION!="add|change", GOTO="dm_end"
ENV{DM_UDEV_RULES_VSN}!="?*", GOTO="dm_end"
ENV{DM_VG_NAME}=="VolGroup00", OWNER:="root", GROUP:="qemu", MODE:="660"
LABEL="dm_end"
  • vm.xml
<pool type='logical'>
  <name>vm</name>
  <source>
    <device path='@@LVM_PHYSICAL_VOLUME@@'/>
    <name>vg_analysis</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>@@LVM_VOLUME_PATH@@</path>
    <permissions>
      <mode>0770</mode>
      <group>@@QEMU_GROUP_ID@@</group>
    </permissions>
  </target>
</pool>
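A sketch for filling in the placeholders non-interactively, assuming /dev/sdb3 and /dev/vg_analysis from the examples above (the qemu group id is looked up live):
$ QEMU_GID=$(getent group qemu | cut -d: -f3)
$ sed -i -e "s|@@LVM_PHYSICAL_VOLUME@@|/dev/sdb3|" \
      -e "s|@@LVM_VOLUME_PATH@@|/dev/vg_analysis|" \
      -e "s|@@QEMU_GROUP_ID@@|$QEMU_GID|" vm.xml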
  • Start analysis volume group
$ virsh pool-define vm.xml; virsh pool-autostart vm; virsh pool-start vm

Host/VM Networking

You've got a couple of options. You can use macvtap to "bridge" a VM network interface to the local network, or you can make the host's ethernet interface part of a Linux bridge. There have been bugs in the kernel-side macvtap code that make it really slow, so you may want to stick with Linux bridging.

For bridged networking, follow one of these how-tos:

https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization/sect-Virtualization-Network_Configuration-Bridged_networking_with_libvirt.html
http://wiki.libvirt.org/page/Networking#Bridged_networking_.28aka_.22shared_physical_device.22.29
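For reference, the end state those guides produce looks roughly like this with Fedora's network-scripts; a sketch, assuming the host NIC is eth0:
$ cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes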

Download thin-provisioning repo

$ git clone git://github.com/warewolf/thin-provisioning.git

Man in the middle (MITM) virtual machine

Virtual Hardware

  • 1st nic macvtap bridge to host's ethernet interface (bridged networking)
  • 2nd nic bridged to malware virtual network
  • 20G disk (virtio)

Software install

  • Fedora 17, minimal install.
  • For the love of god, turn off selinux.
$ vi /etc/sysconfig/selinux
SELINUX=disabled
  • Serial console:
$ vi /etc/default/grub
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_CMDLINE_LINUX="... console=tty1 console=ttyS0,115200" # append the console arguments, and remove rhgb quiet
  • update grub config
$ grub2-mkconfig -o /boot/grub2/grub.cfg
  • Install useful packages: apache (httpd), samba, dnsmasq, tcpdump, etc.
$ yum install -y httpd samba dnsmasq tcpdump
$ for foo in nmb smb dnsmasq httpd; do systemctl enable $foo.service; systemctl start $foo.service; done
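dnsmasq configuration is left to you; a minimal sketch that hands out DHCP leases and sinkholes every DNS lookup back to the MITM box, assuming its malware-network interface is eth1 at 10.10.10.1 (both assumptions, adjust to taste):
$ vi /etc/dnsmasq.conf
interface=eth1
dhcp-range=10.10.10.50,10.10.10.150,12h
# answer every DNS query with the MITM box itself (sinkhole)
address=/#/10.10.10.1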

Analysis Windows VMs

  • 1 NIC connected to "malware" virtual network
  • Install analysis tools + applications
  • When finished, set VM disk images read only:
$ lvchange -p r vg_analysis/disk_image
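If you need to update the tools later, flip the image back to read-write first:
$ lvchange -p rw vg_analysis/disk_image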

Running Training Classes

  1. Create your cloned analysis VM w/ clone-vm.pl
$ clone-vm.pl --domain base_VM_image --clone class_a --cowpool ram
  2. Start the analysis VM
$ virsh start class_a
Domain class_a started
  3. Screencast / Record analysis VM
$ record-vnc.pl --domain class_a --fbs class_session-a --port 5980 --fullcontrol=instructor --viewonly=student
  4. Connect to VM host port 5980 (or VNC screen 80) and authenticate w/ instructor or student - make sure everyone connects "shared", otherwise vnc-reflector will close the connection immediately.
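When the class is over, tear the clone down with plain virsh (clone-vm.pl may offer its own cleanup; this is the generic route). Since the COW volume lives in the tmpfs-backed ram pool, it disappears on host reboot anyway.
$ virsh destroy class_a
$ virsh undefine class_a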