Tag Archives: nested virtualization

Capturing x86 CPU diagnostics

Some time ago I learnt, from Paolo Bonzini (upstream KVM maintainer), about a little debugging utility, x86info (written by Dave Jones), which captures detailed CPU diagnostics: TLB, cache sizes, CPU feature flags, model-specific registers, etc. Take a look at its man page for specifics.

Install:

 $  yum install x86info -y 

Run it and capture the output in a file:

 $  x86info -a 2>&1 | tee stdout-x86info.txt  

As part of debugging KVM-based nested virtualization issues, I captured x86info output for L0 (bare metal, Intel Haswell), L1 (the guest hypervisor), and L2 (the nested guest, running on L1).
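To compare the three levels, the captured files can simply be grepped and diffed. A minimal sketch, assuming the per-level output was saved to hypothetical file names like the ones below, and that x86info lists the CPU feature flags by name:

# Capture on each level first, e.g.:
#   (on L0)  x86info -a 2>&1 | tee x86info-L0.txt
#   (on L1)  x86info -a 2>&1 | tee x86info-L1.txt
#   (on L2)  x86info -a 2>&1 | tee x86info-L2.txt

# Which levels report the vmx feature flag?
grep -l vmx x86info-L0.txt x86info-L1.txt x86info-L2.txt

# Compare the full outputs of L0 and L1 side by side:
diff -u x86info-L0.txt x86info-L1.txt | less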

Leave a comment

Filed under Uncategorized

Nested Virtualization — KVM, Intel, with VMCS Shadowing

[Previous installments on Nested Virtualization with KVM and Intel.]

This is part of some recent testing I’ve been doing with upstream KVM (for 3.10.1). The threads linked here have initial tests benchmarking kernel compile times in L2 (with make defconfig, a default config file), and some minimal guestfish appliance start-up timings in L1.
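For reference, the kernel-compile benchmark in L2 boils down to something like this (a sketch, not the exact commands from the linked threads; it assumes a kernel source tree is already unpacked):

# Inside L2 (nested guest):
cd linux                      # unpacked kernel source tree
make defconfig                # the default config file mentioned above
time make -j"$(nproc)"        # wall-clock build time is what gets compared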

Some details:

  • Setup information to test with VMCS (Virtual Machine Control Structure) Shadowing. In brief, VMCS Shadowing, a processor-specific feature, as described upstream, can reduce the overhead of nested virtualization by reducing the number of VMExits from L1 to L0. (A quick way to check that it's enabled on L0 is sketched after this list.)
  • Simple scripts used to create L1 and L2.
  • Libvirt XMLs of L1, L2 guests, for reference.
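To check whether the feature is actually enabled on L0, kvm_intel exposes a module parameter for it (a quick sketch; it assumes a kernel recent enough, i.e. 3.10+, to carry the shadow-VMCS support):

# On L0 (bare metal):
cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
# 'Y' means KVM will use VMCS shadowing for nested guests,
# provided the processor supports it (e.g. Haswell).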

The gritty details of reasons for VMExits are described in Intel architecture manuals, Volume 3b, APPENDIX 1.

1 Comment

Filed under Uncategorized

Nested virtualization with KVM and Intel on Fedora-18

KVM nested virtualization with Intel finally works for me on Fedora-18. All three layers, L0 (physical host) -> L1 (regular guest/guest hypervisor) -> L2 (nested guest), are running successfully as of this writing.

Previously, nested KVM virtualization on Intel was discussed here and here. This time, on Fedora-18, I was able to successfully boot and use a nested guest with reasonable performance. (Although I still have to do more formal tests to show some meaningful performance results.)

Test setup information

Config info about the physical host, regular-guest/guest hypervisor and nested-guest. (All of them are Fedora-18; x86_64)

  • Physical Host (Host hypervisor/Bare metal)
    • Node info and some version info
      
      #--------------------#
      # virsh nodeinfo
      CPU model:           x86_64
      CPU(s):              4
      CPU frequency:       1995 MHz
      CPU socket(s):       1
      Core(s) per socket:  4
      Thread(s) per core:  1
      NUMA cell(s):        1
      Memory size:         10242692 KiB
      
      #--------------------#
      # cat /etc/redhat-release ; uname -r ; arch ; rpm -q qemu-kvm libvirt-daemon-kvm
      Fedora release 18 (Spherical Cow)
      3.6.7-5.fc18.x86_64
      x86_64
      qemu-kvm-1.3.0-9.fc18.x86_64
      libvirt-daemon-kvm-1.0.2-1.fc18.x86_64
      #
      #--------------------# 
      
  • Regular Guest (Guest Hypervisor)
    • A 20GB qcow2 disk image w/ cache=’none’ enabled in the libvirt xml
    • Node info and some version info
      #--------------------# 
      # virsh nodeinfo
      CPU model:           x86_64
      CPU(s):              4
      CPU frequency:       1994 MHz
      CPU socket(s):       4
      Core(s) per socket:  1
      Thread(s) per core:  1
      NUMA cell(s):        1
      Memory size:         4049888 KiB
      #--------------------# 
      # cat /etc/redhat-release ; uname -r ; arch ; rpm -q qemu-kvm libvirt-daemon-kvm
      Fedora release 18 (Spherical Cow)
      3.6.10-4.fc18.x86_64
      x86_64
      qemu-kvm-1.2.2-6.fc18.x86_64
      libvirt-daemon-kvm-0.10.2.3-1.fc18.x86_64
      #--------------------# 
      
  • Nested Guest
    • Config: 2GB Memory; 2 vcpus; 6GB sparse qcow2 disk image

Setting up guest hypervisor and nested guest

Refer to the notes linked above to get the nested guest up and running:

  • Create a regular guest/guest-hypervisor —
     # ./create-regular-f18-guest.bash 
  • Expose Intel VMX extensions inside the guest hypervisor by adding the ‘cpu’ attribute to the regular-guest’s libvirt XML file (the exact snippet is shown after this list)
  • Shut down the regular guest, redefine it ( virsh define /etc/libvirt/qemu/regular-guest-f18.xml ), then start the guest ( virsh start regular-guest-f18 )
  • Now, install virtualization packages inside the guest hypervisor
  •  # yum install libvirt-daemon-kvm libvirt-daemon-config-network libvirt-daemon-config-nwfilter python-virtinst -y 
  • Start libvirtd service —
     # systemctl start libvirtd.service && systemctl status libvirtd.service  
  • Create a nested guest
     # ./create-nested-f18-guest.bash 
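For reference, the ‘cpu’ element added in the second step above is the same minimal snippet used in the older Intel post further down this page:

<cpu match='exact'>
  <model>core2duo</model>
  <feature policy='require' name='vmx'/>
</cpu>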

The scripts and reference libvirt XMLs I used for this demonstration are posted on GitHub.

qemu-kvm invocation of bare-metal and guest hypervisors

qemu-kvm invocation of regular guest (guest hypervisor) indicating vmx extensions


# ps -ef | grep -i qemu-kvm | egrep -i 'regular-guest-f18|vmx'
qemu     15768     1 19 13:33 ?        01:01:52 /usr/bin/qemu-kvm -name regular-guest-f18 -S -M pc-1.3 -cpu core2duo,+vmx -enable-kvm -m 4096 -smp 4,sockets=4,cores=1,threads=1 -uuid 9a7fd95b-7b4c-743b-90de-fa186bb5c85f -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/regular-guest-f18.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/export/vmimgs/regular-guest-f18.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a6:ff:96,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Running virt-host-validate (it’s part of the libvirt-client package) on the bare-metal host, indicating the host is configured to run KVM:


# virt-host-validate 
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking for device /dev/kvm                                         : PASS
  QEMU: Checking for device /dev/vhost-net                                   : PASS
  QEMU: Checking for device /dev/net/tun                                     : PASS
   LXC: Checking for Linux >= 2.6.26                                         : PASS
# 

Networking Info
– The regular guest is using the bare metal host’s bridge device ‘br0’
– The nested guest is using libvirt’s default bridge ‘virbr0’

Caveat: If NAT’d networking is used on both the bare metal host and the guest hypervisor, both default to the 192.168.122.0/24 subnet (unless explicitly changed), which will mangle the networking setup. Bridging on L0 (bare metal host) and NAT on L1 (guest hypervisor) avoids this.
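If you do want NAT on both layers, a workaround is to give L1’s default network a different subnet before creating L2. A sketch (the 192.168.124.x range here is just an arbitrary example):

# Inside L1 (guest hypervisor):
virsh net-edit default
#   ...change <ip address='192.168.122.1' netmask='255.255.255.0'> to e.g. 192.168.124.1,
#   and the dhcp <range> to 192.168.124.2 - 192.168.124.254

virsh net-destroy default && virsh net-start default   # restart the network with the new subnet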

Notes

  • Ensure the serial console is enabled in both the L1 and L2 guests; it is very handy for debugging. If you use the kickstart file mentioned here, it’s taken care of. The magic lines to be added to the kernel cmd line are console=tty0 console=ttyS0,115200
  • Once the nested guest was created, I tried to set the hostname, and it turns out that for some reason ext4 had made the file system read-only:
    
    	#  hostnamectl set-hostname nested-guest-f18.foo.bar.com
    Failed to issue method call: Read-only file system
    

    Then I see these I/O errors in /var/log/messages:

    
    .
    .
    .
    Feb 12 04:22:31 localhost kernel: [  724.080207] end_request: I/O error, dev vda, sector 9553368
    Feb 12 04:22:31 localhost kernel: [  724.080922] Buffer I/O error on device dm-1, logical block 33467
    Feb 12 04:22:31 localhost kernel: [  724.080922] Buffer I/O error on device dm-1, logical block 33468
    

    At this point, I tried to reboot the guest, only to be thrown into a dracut repair shell. I ran fsck a couple of times and tried to reboot the nested guest, to no avail. Then I force powered-off the nested guest:

    # virsh destroy nested-guest-f18

    Now it boots just fine. While trying to get to the bottom of the I/O errors, I discussed this behaviour with Rich Jones, and he suggested trying some more I/O activity inside the nested guest to see if I could trigger those errors again:

    
    # find / -exec md5sum {} \; > /dev/null
    # find / -xdev -exec md5sum {} \; > /dev/null
    

    After the above commands ran for more than 15 minutes, the I/O errors could not be triggered any more.

  • A test for a libguestfs program (from rwmj) would be to run it on the host and the first-level guest, to compare. The command needs to be run several times, discarding the first few results, to get a hot cache:
    
    # time guestfish -a /dev/null run
    
  • Another libguestfs test Rich suggested is to disable nested virt and measure guestfish running in the guest, to find the speed-up from nested virtualization in contrast to pure software emulation (a rough way to do that is sketched below).
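A rough way to run that comparison (a sketch; it assumes no guests are running on L0 when the module is reloaded, and that L1 is restarted afterwards):

# On L0 (bare metal): reload kvm_intel with nesting disabled
modprobe -r kvm_intel
modprobe kvm_intel nested=0

# Inside L1 (after restarting it): /dev/kvm should now be absent, so
# libguestfs falls back to software emulation (TCG); time it the same way:
time guestfish -a /dev/null run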

Next up: run more useful workloads in these nested VMX guests.

1 Comment

Filed under Uncategorized

Nested Virtualization with Intel — take-2 with Fedora-17

My previous attempt, with Fedora 16, to create a nested virtual guest on an Intel CPU was only a 90% success. I gave it a retry with Fedora 17 and the newest available virt packages from the virt-preview repository.

I posted some notes, configurations of the physical host, regular guest and nested guest, and the scripts I used on my Fedora People page.

A few observations:

  • The regular guest (L1) was created just fine on the physical host (L0). No news here; this is expected.
  • Shutting down the regular guest causes ‘virsh’ to hang with a segfault. To avoid this, I have to restart the libvirtd daemon and then start the guest. I posted some more details to the Fedora virt list here and here.
  • Now, when I try to create the nested guest (L2), I don’t see any progress on the serial console once it attempts to retrieve initrd and vmlinuz:

# ./create-nested-guest.bash 
Creating qcow2 disk image..
Formatting '/export/vmimgs/nested-guest-f17.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 preallocation='metadata' 
11G -rw-r--r--. 1 root root 11G Jul 28 10:47 /export/vmimgs/nested-guest-f17.qcow2

Starting install...
Retrieving file .treeinfo...                                                                                                 | 1.8 kB     00:00 !!! 
Retrieving file vmlinuz...                                                                                                   | 8.9 MB     00:00 !!! 
Retrieving file initrd.img...                                                                                                |  47 MB     00:00 !!! 
Creating domain...                                                                                                           |    0 B     00:00     

I tried to view the system/libvirt logs (using less or tail), check the status of the libvirtd daemon, and run virsh list in the ‘regular guest’, to no avail. Those commands just hang on stdout.
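For the record, these were the kind of commands that hung inside the regular guest (the log path is libvirt’s usual per-domain location; the domain name follows the script above):

tail /var/log/libvirt/qemu/nested-guest-f17.log   # per-domain QEMU log
systemctl status libvirtd.service
virsh list --all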

A little bit more detail in a text file here.

Meanwhile, here are the version details. I used the same kernel, qemu-kvm, and libvirt on both the physical host and the regular guest:


[root@moon ~]# uname -r ; rpm -q qemu-kvm libvirt 
3.4.6-2.fc17.x86_64
qemu-kvm-1.1.0-9.fc17.x86_64
libvirt-0.9.13-3.fc17.x86_64
[root@moon ~]# 

I’m still investigating and will update here once I have more information.

4 Comments

Filed under Uncategorized

Nested Virtualization with KVM and AMD

After my previous attempt the other day to create a nested guest (KVM on KVM) on Intel hardware, I got hold of an AMD server machine with virt extensions enabled and gave it a whirl. This went slightly smoother than the Intel attempt.

Some config info about the physical host, regular-guest and nested-guest. (All of them are Fedora-16; x86_64)

  • Physical Host (Host hypervisor/Bare metal)
    • Node info
      [root@phy-host-amd]# virsh nodeinfo
      CPU model:           x86_64
      CPU(s):              16
      CPU frequency:       2000 MHz
      CPU socket(s):       2
      Core(s) per socket:  8
      Thread(s) per core:  1
      NUMA cell(s):        1
      Memory size:         8173352 kB
      
  • Regular Guest (Or Guest Hypervisor)
    • Config: 4GB Memory; 6 vcpus; 22GB Raw disk image w/ cache=’none’ enabled in the libvirt xml
  • Nested Guest
    • Config: 2GB Memory; 3 vcpus; 10G Raw disk image

Ensure nesting is enabled on the physical host

Let’s ensure the kvm_amd kernel module has ‘nested’ virt enabled.


[root@phy-host-amd ~]# modinfo kvm_amd | grep -i nested
parm:           nested:int
[root@phy-host-amd ~]# 

[root@phy-host-amd ~]# cat /sys/module/kvm_amd/parameters/nested
1
[root@phy-host-amd ~]# 

[root@phy-host-amd ~]# systool -m kvm_amd -v   | grep -i nested
    nested              = "1"
[root@phy-host-amd ~]# 

CAVEAT: To make life a little easier, I configured bridged networking on the physical host to ensure our regular guest gets a bridged IP and, later, the nested guest gets a NATed IP. I’m noting it here because the physical host initially had no bridging. The default libvirt bridge virbr0 has the 192.168.122.0/24 IP space, so once we set up the regular guest (or guest hypervisor), we’d end up with the same IP space on both. I tried to fix this problem by creating another ‘persistent’ libvirt network interface and enabling autostart for it [virsh net-define; virsh net-start; virsh net-autostart], but it wasn’t elegant and messed up the networks on reboot.

Set up the guest hypervisor
Create a minimal regular guest using virt-install. The one I used is posted here.
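The script boils down to a virt-install invocation roughly like the following (a sketch only; the bridge name, image path, install URL and kernel arguments are placeholders, not the exact values from the posted script):

[root@phy-host-amd ~]# virt-install --connect=qemu:///system \
    --name=regular-guest --ram=4096 --vcpus=6 \
    --disk path=/var/lib/libvirt/images/regular-guest.img,size=22,format=raw,cache=none \
    --network=bridge:br0 \
    --location=http://download.example.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/ \
    --extra-args="console=tty0 console=ttyS0,115200" \
    --nographics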

Now, add the ‘cpu’ attribute to the regular-guest’s libvirt XML to expose AMD’s SVM instructions, which come with the Opteron_G3 model.

Edit the xml using virsh:

# virsh edit regular-guest 

(which will also define the xml)

Here is the attribute to be added to the guest hypervisor’s libvirt xml:

   <cpu>
      <arch>x86_64</arch>
      <model>Opteron_G3</model>
      <vendor>AMD</vendor>
      <topology sockets='2' cores='8' threads='1'/>
      <feature name='wdt'/>
      <feature name='skinit'/>
      <feature name='osvw'/>
      <feature name='3dnowprefetch'/>
      <feature name='cr8legacy'/>
      <feature name='extapic'/>
      <feature name='cmp_legacy'/>
      <feature name='3dnow'/>
      <feature name='3dnowext'/>
      <feature name='pdpe1gb'/>
      <feature name='fxsr_opt'/>
      <feature name='mmxext'/>
      <feature name='ht'/>
      <feature name='vme'/>
    </cpu>

And restarted the regular guest, so that it boots with the -cpu flag carrying the AMD virt extensions:


[root@phy-host-amd ~]# ps -ef | grep -i qemu-kvm
qemu     26677     1 14 10:39 ?        00:00:30 /usr/bin/qemu-kvm -S -M pc-0.14 -cpu phenom,+wdt,+skinit,+osvw,+3dnowprefetch,+misalignsse,+sse4a,+abm,+cr8legacy,+extapic,+cmp_legacy,+lahf_lm,+rdtscp,+pdpe1gb,+popcnt,+cx16,+ht,+vme -enable-kvm -m 4096 -smp 6,sockets=2,cores=8,threads=1 -name regular-guest -uuid 8f6a4478-496b-51d8-2de2-ff7fdb964af3 -nographic -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/regular-guest.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -drive file=/var/lib/libvirt/images/regular-guest.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:5f:c6:5f,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Now, let’s fetch the IP of the regular-guest using virt-cat


[root@phy-host-amd ~]# virsh list
 Id Name                 State
----------------------------------
  5 regular-guest        running
[root@phy-host-amd ~]# 
[root@phy-host-amd ~]# virt-cat regular-guest /var/log/messages | grep 'dhclient.*bound to'
Jan 17 10:13:06 dhcpyy-zz dhclient[732]: bound to ww.xx.yy.zz -- renewal in 32578 seconds.

(Note: ‘ww.xx.yy.zz’ above will be a bridged IP address)

Create the nested guest
Now, install virt packages in the regular guest. Also, let’s check that the /dev/kvm char device is exposed in the regular guest, and start the libvirtd service.
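On Fedora 16, the package install step would be something along these lines (a sketch; package names as of that release, adjust as needed):

[root@regular-guest ~]# yum install qemu-kvm libvirt python-virtinst -y
[root@regular-guest ~]# systemctl start libvirtd.service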


[root@regular-guest ~]# file /dev/kvm 
/dev/kvm: character special
[root@regular-guest ~]# systemctl status libvirtd.service 
libvirtd.service - LSB: daemon for libvirt virtualization API
          Loaded: loaded (/etc/rc.d/init.d/libvirtd)
          Active: active (running) since Tue, 17 Jan 2012 10:49:25 -0500; 5s ago
         Process: 1440 ExecStart=/etc/rc.d/init.d/libvirtd start (code=exited, status=0/SUCCESS)
        Main PID: 1448 (libvirtd)
          CGroup: name=systemd:/system/libvirtd.service
                  ├ 1448 libvirtd --daemon
                  └ 1501 /usr/sbin/dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --exce...

Proceed with installing a minimal F16 nested-guest w/ virt-install. The script I used is here

Debugging note: Once the guest install is finished, fix serial console access by disabling the plymouth service using this workaround. This lets us log in via the virsh serial console (to get kernel and boot messages) without any line breaks while entering credentials:

 # ln -s /dev/null /etc/systemd/system/plymouth-start.service

Get the (NATed) IP of the nested-guest. (Also, grepped for the qemu-kvm command-line of the nested-guest.)


[root@regular-guest ~]# virsh list
 Id Name                 State
----------------------------------
  2 nested-guest         running
[root@regular-guest ~]# ps -ef | grep qemu-kvm
qemu      2245     1  2 Jan17 ?        00:20:11 /usr/bin/qemu-kvm -S -M pc-0.14 -enable-kvm -m 2048 -smp 3,sockets=3,cores=1,threads=1 -name nested-guest -uuid 2aae2ab5-ddb6-2585-aa16-7fe97296f34b -nographic -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/nested-guest.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -drive file=/var/lib/libvirt/images/nested-guest.img,if=none,id=drive-virtio-disk0,format=raw -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0e:4e:53,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

[root@regular-guest ~]# virt-cat nested-guest /var/log/messages | grep 'dhclient.*bound to'                                                            
Jan 17 11:08:30 localhost dhclient[721]: bound to 192.168.122.220 -- renewal in 1393 seconds.
[root@regular-guest ~]# 

SSH into the nested guest, install the virt-what package, and run it to see if we’re on a hypervisor:


[root@localhost ~]# cat /etc/fedora-release 
Fedora release 16 (Verne)
[root@localhost ~]# ifconfig eth0 | grep inet
          inet addr:192.168.122.220  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe0e:4e53/64 Scope:Link
[root@localhost ~]# 
[root@localhost ~]# virt-what 
kvm

Wooo!! so we’re on an OS which is inside an OS which is inside an OS.

15 Comments

Filed under Uncategorized

Nested Virtualization with KVM Intel

Some context: In regular virtualization, your physical Linux host is the hypervisor and runs multiple operating systems. Nested virtualization lets you run a guest inside a regular guest (essentially a guest hypervisor). For AMD, nested support has been available for a while, and some people have reported success nesting KVM guests. For Intel, support arrived more recently (about a year ago), with some work still in progress, so I thought I’d give it a whirl when Adam Young started a discussion about it in the context of the OpenStack project.

Some of the common use cases being discussed for nested virtualization:
– For instance, a cloud user gets a beefy regular guest (which she completely controls). This user can turn the regular guest into a hypervisor and cheerfully run/manage multiple guests for developing or testing, without the hassle and intervention of the cloud provider.
– The possibility of having many instances of a virtualization setup (hypervisor and its guests) on a single bare metal machine.
– The ability to debug and test hypervisor software.

I have immediate access to moderately beefy Intel hardware, and the rest of the post is based on Intel’s CPU virt extensions. Before proceeding, let’s settle on some terminology for clarity:

  • Physical Host (Host hypervisor/Bare metal)
    • Config: Intel(R) Xeon(R) CPU (4 cores/socket); 10GB memory; CPU freq 2GHz; running latest Fedora-16 (minimal footprint, @core only with virt pkgs); x86_64; kernel-3.1.8-2.fc16.x86_64
  • Regular Guest (Or Guest Hypervisor)
    • Config: 4GB memory; 4 vCPUs; 20GB raw disk image with cache='none' to have decent I/O; minimal @core F16; same virt packages as the physical host; x86_64
  • Nested Guest (Guest installed inside the Regular Guest)
    • Config: 2GB memory; 1 vCPU; minimal (@core only) F16; x86_64

Enabling Nesting on the Physical Host

Node Info of the Physical Host.

 
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              4
CPU frequency:       1994 MHz
CPU socket(s):       1
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         10242864 kB

Let us first ensure the kvm_intel kernel module has nesting enabled. By default, it’s disabled for the Intel arch, but enabled for the AMD arch with SVM (Secure Virtual Machine) extensions.

 
# modinfo kvm_intel | grep -i nested
parm:           nested:bool
# 

And we need to pass kvm-intel.nested=1 on the kernel command line and reboot the host to enable nesting for the Intel KVM kernel module. This can be verified after boot by doing:

 
# cat /sys/module/kvm_intel/parameters/nested 
Y
# systool -m kvm_intel -v   | grep -i nested
    nested              = "Y"
# 

Alternatively, Adam Young identified that nesting can be enabled by adding the directive options kvm-intel nested=y to the end of the /etc/modprobe.d/dist.conf file and rebooting the host, so it persists.
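Putting the options together, a quick sketch (reload the module only when no guests are running on the host):

# One-off, without a reboot:
modprobe -r kvm_intel
modprobe kvm_intel nested=1

# Persistent across reboots (see also UPDATE2 at the end of this post):
echo "options kvm-intel nested=y" >> /etc/modprobe.d/dist.conf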

Set up the Regular Guest (or Guest Hypervisor)
Install a regular guest using virt-install, the oz tool, or any other preferred way. I made a quick script here. Ensure you have cache='none' in the disk attribute of the guest hypervisor’s XML file. (Observation: installing via the virt-install tool didn’t seem to pick this option by default.) Here is the ‘disk’ element libvirt XML snippet:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/regular-guest.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>

Now, let’s try to enable Intel VMX (Virtual Machine Extensions) in the regular guest’s CPU. We can do that by running the command below on the physical host (aka host hypervisor), adding the ‘cpu’ attribute to the regular-guest’s libvirt XML file, and starting the guest.

# virsh  capabilities | virsh cpu-baseline /dev/stdin 
<cpu match='exact'>
  <model>Penryn</model>
  <vendor>Intel</vendor>
  <feature policy='require' name='dca'/>
  <feature policy='require' name='xtpr'/>
  <feature policy='require' name='tm2'/>
  <feature policy='require' name='vmx'/>
  <feature policy='require' name='ds_cpl'/>
  <feature policy='require' name='monitor'/>
  <feature policy='require' name='pbe'/>
  <feature policy='require' name='tm'/>
  <feature policy='require' name='ht'/>
  <feature policy='require' name='ss'/>
  <feature policy='require' name='acpi'/>
  <feature policy='require' name='ds'/>
  <feature policy='require' name='vme'/>
</cpu>

The output of the above command offers a variety of options. Since we only need the vmx extension, I took the simple route by adding the following to the regular-guest’s libvirt XML (virsh edit ...) and starting it:

<cpu match='exact'>
  <model>core2duo</model>
 <feature policy='require' name='vmx'/>
</cpu>

Thanks to Jiri Denemark for the above hint. Also note that there is a very detailed and informative post from Dan P Berrange on host/guest CPU models in libvirt.

As we enabled vmx in the guest hypervisor, let’s confirm that vmx is exposed in the emulated CPU by ensuring qemu-kvm is invoked with -cpu core2duo,+vmx:


[root@physical-host ~]# ps -ef | grep qemu-kvm
qemu     17102     1  4 22:29 ?        00:00:34 /usr/bin/qemu-kvm -S -M pc-0.14 
-cpu core2duo,+vmx -enable-kvm -m 3072
-smp 3,sockets=3,cores=1,threads=1 -name f16test1 
-uuid f6219dbd-f515-f3c8-a7e8-832b99a24b5d -nographic -nodefconfig 
-nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/f16test1.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-drive file=/export/vmimgs/f16test1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=21,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:e6:cc:4e,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Now, let’s attempt to create a nested guest

Here comes the more interesting part: the nested-guest config will be 2GB RAM, 1 vCPU and an 8GB virtual disk. Let’s invoke a virt-install command line with a minimal kickstart install:


[root@regular-guest ~]# virt-install --connect=qemu:///system \
    --network=bridge:virbr0 \
    --initrd-inject=/root/fed.ks \
    --extra-args="ks=file:/fed.ks console=tty0 console=ttyS0,115200 serial rd_NO_PLYMOUTH" \
    --name=nested-guest --disk path=/var/lib/libvirt/images/nested-guest.img,size=6 \
    --ram 2048 \
    --vcpus=1 \
    --check-cpu \
    --hvm \
    --location=http://download.foo.bar.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/ \
    --nographics

Starting install...
Retrieving file .treeinfo...                                                                                                 | 1.7 kB     00:00 ... 
Retrieving file vmlinuz...                                                                                                   | 7.9 MB     00:08 ... 
Retrieving file initrd.img...                               28% [==============                                   ] 647 kB/s |  38 MB     02:25 ETA 

virt-install proceeds fine (to a certain extent), doing all the regular things: getting access to the network, creating devices, creating file systems, performing dependency checks; finally, the package install proceeds:


Welcome to Fedora for x86_64



     ┌─────────────────────┤ Package Installation ├──────────────────────┐
     │                                                                   │
     │                                                                   │
     │                                 24%                               │
     │                                                                   │
     │                   Packages completed: 52 of 390                   │
     │                                                                   │
     │ Installing glibc-common-2.14.90-14.x86_64 (112 MB)                │
     │ Common binaries and locale data for glibc                         │
     │                                                                   │
     │                                                                   │
     │                                                                   │
     └───────────────────────────────────────────────────────────────────┘

And now it’s stuck like that forever. It doesn’t budge, trying to install packages for eternity. Let’s see what the state of the guest is from a separate terminal:


[root@regular-guest ~]# virsh list
 Id Name                 State
----------------------------------
  1 nested-guest         paused

[root@regular-guest ~]# 
[root@regular-guest ~]#  virsh domstate nested-guest --reason
paused (unknown)

[root@regular-guest ~]# 

So our nested guest seems to be paused, and the package install on the nested guest’s serial console is still hung. I gave up at this point. I need to see whether I can get any helpful info with the virt-dmesg tool or any other way to debug this further.

Just to note, there is enough disk space and memory on the regular guest, so that case is ruled out here. I also destroyed the broken nested guest and attempted to create a fresh one (twice). Still no dice.

So, not much luck yet with the Intel arch; I’ll have to try on an AMD machine.

UPDATE (on Intel arch): After trying a couple of times, I was finally able to SSH into the nested guest, but after a reboot the nested guest loses its IP, rendering it inaccessible. (Info: the regular guest has a bridged IP, and the nested guest has a NATed IP.) I also couldn’t log in via the serial console, as it’s broken due to a regression (which has a workaround). Also, refer to the comments below for further discussion of NATed networking caveats.
UPDATE2: The correct syntax to be added to /etc/modprobe.d/dist.conf is options kvm-intel nested=y

19 Comments

Filed under Uncategorized