A couple of caveats while setting up devstack on Fedora-18

It took more than a couple of attempts to get a running devstack instance on Fedora-18. I was following danpb’s notes. Here are a few tweaks I had to make to get it working:

This is the localrc file I had before proceeding:


$ cat localrc 
DESTDIR=$HOME/src/openstack
DATA_DIR=$DESTDIR/data

LOGFILE=$DATA_DIR/logs/stack.log
SCREEN_LOGDIR=$DATA_DIR/logs

# Switch to use QPid instead of RabbitMQ 
disable_service rabbit
enable_service qpid

# Replace with your primary interface name
HOST_IP_IFACE=eth0
PUBLIC_INTERFACE=eth0
VLAN_INTERFACE=eth0
FLAT_INTERFACE=eth0

# Replace with whatever password you wish to use
MYSQL_PASSWORD=testpwd
SERVICE_TOKEN=testpwd
SERVICE_PASSWORD=testpwd
ADMIN_PASSWORD=testpwd

# Pre-populate glance with a minimal image and a Fedora 17 image
IMAGE_URLS="http://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-uec.tar.gz,http://berrange.fedorapeople.org/images/2012-11-15/f17-x86_64-openstack-sda.qcow2"
$ 

After running stack.sh, you might see some failures — like the tgtd service failing to start. Below is what I had in tgtd.conf and stack.conf.


====
# rpm -q scsi-target-utils
scsi-target-utils-1.0.32-2.fc18.x86_64
====
$ cat /etc/tgt/tgtd.conf 
# The default config file
include /etc/tgt/targets.conf

# Config files from other packages etc.
#include /etc/tgt/conf.d/*.conf

# Explicitly import the stack.conf file
include /etc/tgt/conf.d/stack.conf
====
$ cat /etc/tgt/conf.d/stack.conf 
include /home/tuser1/src/openstack/data/cinder/volumes/*
$
====

And restart tgtd service:


$ systemctl restart tgtd.service  
$ systemctl status tgtd.service                                                                                                                                             
tgtd.service - tgtd iSCSI target daemon
          Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled)
          Active: active (running) since Wed 2013-02-27 06:28:13 EST; 5s ago
         Process: 26826 ExecStop=/usr/sbin/tgtadm --op delete --mode system (code=exited, status=0/SUCCESS)
         Process: 26822 ExecStop=/usr/sbin/tgt-admin --update ALL -c /dev/null (code=exited, status=0/SUCCESS)
         Process: 26820 ExecStop=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
         Process: 26888 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
         Process: 26882 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
         Process: 26880 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
        Main PID: 26879 (tgtd)
          CGroup: name=systemd:/system/tgtd.service
                  └─26879 /usr/sbin/tgtd -f

Feb 27 06:28:12 devstack-fedenglabfoobarcom tgtd[26879]: librdmacm: Warning: couldn't read ABI version.
Feb 27 06:28:12 devstack-fedenglabfoobarcom tgtd[26879]: librdmacm: Warning: assuming: 4
Feb 27 06:28:12 devstack-fedenglabfoobarcom tgtd[26879]: librdmacm: Fatal: unable to get RDMA device list
Feb 27 06:28:12 devstack-fedenglabfoobarcom tgtd[26879]: tgtd: iser_ib_init(3376) Failed to initialize RDMA; load kernel modules?
Feb 27 06:28:12 devstack-fedenglabfoobarcom tgtd[26879]: tgtd: work_timer_start(146) use timer_fd based scheduler
Feb 27 06:28:12 devstack-fedenglabfoobarcom tgtd[26879]: tgtd: bs_init(313) use signalfd notification
Feb 27 06:28:13 devstack-fedenglabfoobarcom systemd[1]: Started tgtd iSCSI target daemon.
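
As an extra sanity check (this is just something I find handy, not part of the original notes), tgt-admin can be asked to list the targets it currently knows about; right after stack.sh there may be none yet, but the command should at least parse the config without complaining:

$ sudo tgt-admin --show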

Source the openrc file, which does a few things: it configures Keystone authentication credentials, sets up Nova’s compute API version, and so on:


$ . openrc 
$ 
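
To get an idea of what it exported, you can peek at the resulting environment (the variable name prefixes below are from memory and may vary slightly between devstack versions):

$ env | grep -E '^OS_|^NOVA_'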

Let’s list images in Glance:


$ glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
| fa1adc47-1ff3-4394-99b3-e782a27762b0 | cirros-0.3.0-x86_64-uec         | ami         | ami              | 25165824  | active |
| b6137e19-fe48-40a1-81d8-ccbbae6bfc2b | cirros-0.3.0-x86_64-uec-kernel  | aki         | aki              | 4731440   | active |
| 3780a3bd-e3ad-4323-908e-8f22789e44e1 | cirros-0.3.0-x86_64-uec-ramdisk | ari         | ari              | 2254249   | active |
| 0c445018-f23c-4e5a-876b-912ea8f6c636 | f17-x86_64-openstack-sda        | qcow2       | bare             | 251985920 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
$ 

Add a new key pair:


$ nova keypair-add oskey > oskey.priv
$ ls
accrc    eucarc      exercises    extras.d  functions    lib      localrc  oskey.priv  rejoin-stack.sh  stackrc         stack.sh  tools
AUTHORS  exerciserc  exercise.sh  files     HACKING.rst  LICENSE  openrc   README.md   samples          stack-screenrc  tests     unstack.sh
$ 
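
One small thing worth doing before using that key with ssh later (ssh refuses private keys that are group- or world-readable):

$ chmod 600 oskey.priv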

Boot an instance (using the m1.tiny flavor):


$ nova boot --key-name oskey --image f17-x86_64-openstack-sda  --flavor m1.tiny f17demo1                                                                           
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | BUILD                                |
| updated                     | 2013-02-27T11:58:58Z                 |
| OS-EXT-STS:task_state       | scheduling                           |
| key_name                    | oskey                                |
| image                       | f17-x86_64-openstack-sda             |
| hostId                      |                                      |
| OS-EXT-STS:vm_state         | building                             |
| flavor                      | m1.tiny                              |
| id                          | 5eaf23b5-3558-4b07-8195-9d24734beced |
| security_groups             | [{u'name': u'default'}]              |
| user_id                     | f28638e924a645e1b69650b8acc99283     |
| name                        | f17demo1                             |
| adminPass                   | e9LZqPFdBvjq                         |
| tenant_id                   | 9b54eaecedb149de9d1d48c43210d463     |
| created                     | 2013-02-27T11:58:58Z                 |
| OS-DCF:diskConfig           | MANUAL                               |
| metadata                    | {}                                   |
| accessIPv4                  |                                      |
| accessIPv6                  |                                      |
| progress                    | 0                                    |
| OS-EXT-STS:power_state      | 0                                    |
| OS-EXT-AZ:availability_zone | None                                 |
| config_drive                |                                      |
+-----------------------------+--------------------------------------+
$ 

Now, list the running nova instances:

 
$ nova list
+--------------------------------------+----------+--------+----------+
| ID                                   | Name     | Status | Networks |
+--------------------------------------+----------+--------+----------+
| 5eaf23b5-3558-4b07-8195-9d24734beced | f17demo1 | ERROR  |          |
+--------------------------------------+----------+--------+----------+
$ 

Oh, it says ERROR; let’s take a look at the logs.

NOTE1: Log files to check: screen-n-cpu.log and screen-n-sch.log (located in $HOME/src/openstack/devstack/data/logs).
NOTE2: Use ‘less -r’ to view the above log files so they’re presented in a readable format; otherwise all control characters (ESC, carriage return) are displayed in caret notation, and it’s unpleasant to look at.
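
For example, to quickly narrow down which service failed and why (adjust the path to wherever your SCREEN_LOGDIR points):

$ cd $HOME/src/openstack/devstack/data/logs
$ grep -ilE 'error|traceback' screen-n-*.log
$ less -r screen-n-cpu.log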

It turns out I had missed adding the executable bit to my $HOME directory; let’s set it:


$ sudo chmod -R +x /home/tuser1/
$ 
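
The reason this matters: the instance files live under the stack user’s home directory, and the qemu/nova processes need execute (search) permission on every directory component leading to them. A quick way to inspect each component’s mode bits (the exact subdirectory below is just an illustration, based on the DATA_DIR from localrc):

$ namei -m /home/tuser1/src/openstack/data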

Let’s delete the old instance, boot a new one (m1.tiny again, with 512 MB of memory), list the running Nova instances, and ensure it’s ACTIVE:

 
#----------#
$ nova delete 5eaf23b5-3558-4b07-8195-9d24734beced                                                                                                                 
$ 
#----------#
$ nova boot --key-name oskey --image f17-x86_64-openstack-sda  --flavor m1.tiny f17demo1
#----------#
$ nova list
+--------------------------------------+----------+--------+------------------+
| ID                                   | Name     | Status | Networks         |
+--------------------------------------+----------+--------+------------------+
| 5710ba30-499e-4c2a-872c-327217e229c6 | f17demo1 | ACTIVE | private=10.0.0.3 |
+--------------------------------------+----------+--------+------------------+
$ 
#----------#
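
With the instance ACTIVE, the key pair created earlier can be used to log in over the private network. The login user name depends on the image; ‘ec2-user’ below is an assumption for this Fedora 17 cloud image:

$ ssh -i oskey.priv ec2-user@10.0.0.3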

List the running OpenStack services:

 
$ openstack-status 
== Support services ==
mysqld:                       active (disabled on boot)
libvirtd:                     active
qpidd:                        active
$ 


Nova’s way of using a disk image when it boots a guest for the first time

Let’s see how Nova — the compute component of OpenStack — uses a disk image from the time it was initially imported into Glance (OpenStack’s repository for virtual machine disk images) till a new virtual machine instance is booted and running.

I started by downloading a Fedora 17 disk image from here (keep a note of the size of the image: 241 MB):


[tuser1@interceptor ~(keystone_user1)]$ ls -lash f17-x86_64-openstack-sda.qcow2 
241M -rw-rw-r--. 1 tuser1 tuser1 241M Jan 13  2012 f17-x86_64-openstack-sda.qcow2
[tuser1@interceptor ~(keystone_user1)]$ 

Import the above Fedora 17 disk image into Glance:


$ . ~/keystone_admin
$ glance image-create --name="fedora-17" --is-public=true \
--disk-format=qcow2 --container-format bare < f17-x86_64-openstack-sda.qcow2


(Note that I have to source the Keystone admin credentials to import the image into Glance.)

Let’s list the current images in Glance:


$ . ~/keystone_tuser1
$ glance image-list
+--------------------------------------+-----------+-------------+------------------+------------+--------+
| ID                                   | Name      | Disk Format | Container Format | Size       | Status |
+--------------------------------------+-----------+-------------+------------------+------------+--------+
| 1e6292f9-82bd-4cdb-969e-c863cb1c6692 | fedora-17 | qcow2       | bare             | 251985920  | active |
| acc4c853-9153-4e80-b3c8-e253451ae983 | rhel63    | qcow2       | bare             | 1074135040 | active |
+--------------------------------------+-----------+-------------+------------------+------------+--------+
$

List the currently running Nova instances:


$ nova list
+--------------------------------------+-----------+--------+-------------------+
| ID                                   | Name      | Status | Networks          |
+--------------------------------------+-----------+--------+-------------------+
| 08d616a9-87a1-4c0d-b986-7d6aa5ed6780 | fedora-t1 | ACTIVE | net1=ww.xx.yyy.zz |
| 3e487977-37e8-4f26-9443-d65ecbdf83c9 | fedora-t2 | ACTIVE | net1=ww.xx.yyy.zz |
| 48d9e518-a91f-48db-9d9b-965b243e7113 | fedora-t4 | ACTIVE | net1=ww.xx.yyy.zz |
+--------------------------------------+-----------+--------+-------------------+
$ 

Let’s also list the running guests using libvirt’s virsh:


$ sudo virsh list
 Id    Name                           State
----------------------------------------------------
 12    instance-0000000c              running
 13    instance-0000000d              running
 22    instance-00000012              running

$ 

Find the block device in use for one of the running instances to examine it further — instance-0000000c, in this case:


$ sudo virsh domblklist instance-0000000c
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/instance-0000000c/disk

$ 

Let’s get information about the disk in use by the above nova instance; specifically, find its backing file:


$ qemu-img info /var/lib/nova/instances/instance-0000000c/disk
image: /var/lib/nova/instances/instance-0000000c/disk
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 149M
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20

Now, get information about the “backing file” used by the overlay (used by the running Nova instance above):


$ qemu-img info /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
image: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 740M
$ 


It’s worth noting here that the original disk image uploaded into Glance was a qcow2 image; however, the base image being used by Nova is a raw disk image.

From this, we can see that when booting a virtual machine instance for the first time, Nova does a few things (a rough command-line equivalent is sketched below):

  1. Makes a copy of the original qcow2 disk image from Glance, converts it into a ‘raw’ sparse image, and uses that as the base image (located in /var/lib/nova/instances/_base). For the reasoning behind converting non-raw images to raw, refer to the “Background info” section below.
  2. Expands the size of the base image to 20GB (because I used the m1.small flavour from Nova when initially booting the image).
  3. Uses this base image to instantiate a copy-on-write (qcow2) overlay, and boots the Nova instance from that overlay.
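
A rough, hand-run equivalent of those steps would look something like this (the file names are illustrative, not the ones Nova actually uses; Nova drives qemu-img internally):

# Convert the qcow2 image fetched from Glance into a raw, sparse base image
$ qemu-img convert -O raw f17-x86_64-openstack-sda.qcow2 base_raw

# Grow the base image to the flavor's disk size (20G for m1.small)
$ qemu-img resize base_raw 20G

# Create a qcow2 copy-on-write overlay backed by the raw base image, then inspect it
$ qemu-img create -f qcow2 -o backing_file=base_raw,backing_fmt=raw instance_disk
$ qemu-img info instance_disk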

To demonstrate this post, I used Red Hat OpenStack (Folsom) on a single RHEL 6.4 node, running the nova, glance, keystone and cinder services, with KVM as the underlying hypervisor.

Background info:

[1] From Nova’s git log, the conversion of non-raw images to raw can be traced to this commit:

commit ff9d353b2f4fee469e530fbc8dc231a41f6fed84
Author: Scott Moser 
Date:   Mon Sep 19 16:57:44 2011 -0400

    convert images that are not 'raw' to 'raw' during caching to node
    
    This uses 'qemu-img' to convert images that are not 'raw' to be 'raw'.
    By doing so, it
     a.) refuses to run uploaded images that have a backing image reference
         (LP: #853330, CVE-2011-3147)
     b.) ensures that when FLAGS.use_cow_images is False, and the libvirt
         xml written specifies 'driver_type="raw"' that the disk referenced
         is also raw format. (LP: #837102)
     c.) removes compression that might be present to avoid cpu bottlenecks
         (LP: #837100)
    
    It does have the negative side affect of using more space in the case where
    the user uploaded a qcow2 (or other advanced image format) that could have
    been used directly by the hypervisor.  That could, later, be remedied by
    another 'qemu-img convert' being done to the "preferred" format of the
    hypervisor.

[2] Pádraig Brady pointed out (thank you!) this bug report, https://bugs.launchpad.net/nova/+bug/932180, which notes further reasons for converting images from non-raw to raw.

As an aside, more information on upstream disk image size allocation/performance improvements: https://blueprints.launchpad.net/nova/+spec/preallocated-images

UPDATE: Pádraig Brady writes in excellent detail about the life of an OpenStack libvirt image here: http://www.pixelbeat.org/docs/openstack_libvirt_images/


Nested virtualization with KVM and Intel on Fedora-18

KVM nested virtualization with Intel finally works for me on Fedora-18. All three layers, L0 (physical host) -> L1 (regular guest/guest hypervisor) -> L2 (nested guest), are running successfully as of this writing.

Previously, nested KVM virtualization on Intel was discussed here and here. This time, on Fedora-18, I was able to successfully boot and use a nested guest with reasonable performance (although I still have to do more formal tests to show some meaningful performance results).
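
One prerequisite worth double-checking on the bare metal host (the notes linked above cover the full setup): the kvm_intel module has to be loaded with nested support enabled, which was still off by default at this point. A minimal sketch, where the .conf file name is simply my own choice:

# cat /sys/module/kvm_intel/parameters/nested     # prints Y when nested support is on
# echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel-nested.conf
# rmmod kvm_intel && modprobe kvm_intel           # reload only when no guests are running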

Test setup information

Config info about the physical host, regular-guest/guest hypervisor and nested-guest. (All of them are Fedora-18; x86_64)

  • Physical Host (Host hypervisor/Bare metal)
    • Node info and some version info
      
      #--------------------#
      # virsh nodeinfo
      CPU model:           x86_64
      CPU(s):              4
      CPU frequency:       1995 MHz
      CPU socket(s):       1
      Core(s) per socket:  4
      Thread(s) per core:  1
      NUMA cell(s):        1
      Memory size:         10242692 KiB
      
      #--------------------#
      # cat /etc/redhat-release ; uname -r ; arch ; rpm -q qemu-kvm libvirt-daemon-kvm
      Fedora release 18 (Spherical Cow)
      3.6.7-5.fc18.x86_64
      x86_64
      qemu-kvm-1.3.0-9.fc18.x86_64
      libvirt-daemon-kvm-1.0.2-1.fc18.x86_64
      #
      #--------------------# 
      
  • Regular Guest (Guest Hypervisor)
    • A 20GB qcow2 disk image with cache=’none’ enabled in the libvirt XML
    • 
      #--------------------# 
      # virsh nodeinfo
      CPU model:           x86_64
      CPU(s):              4
      CPU frequency:       1994 MHz
      CPU socket(s):       4
      Core(s) per socket:  1
      Thread(s) per core:  1
      NUMA cell(s):        1
      Memory size:         4049888 KiB
      #--------------------# 
      # cat /etc/redhat-release ; uname -r ; arch ; rpm -q qemu-kvm libvirt-daemon-kvm
      Fedora release 18 (Spherical Cow)
      3.6.10-4.fc18.x86_64
      x86_64
      qemu-kvm-1.2.2-6.fc18.x86_64
      libvirt-daemon-kvm-0.10.2.3-1.fc18.x86_64
      #--------------------# 
      
  • Nested Guest
    • Config: 2GB Memory; 2 vcpus; 6GB sparse qcow2 disk image

Setting up guest hypervisor and nested guest

Refer to the notes linked above to get the nested guest up and running:

  • Create a regular guest/guest-hypervisor —
     # ./create-regular-f18-guest.bash 
  • Expose Intel VMX extensions inside the guest hypervisor by adding a ‘cpu’ element to the regular guest’s libvirt XML file (a minimal snippet is shown just after this list)
  • Shut down the regular guest, redefine it ( virsh define /etc/libvirt/qemu/regular-guest-f18.xml ), and start it again ( virsh start regular-guest-f18 )
  • Now, install virtualization packages inside the guest-hypervisor
  •  # yum install libvirt-daemon-kvm libvirt-daemon-config-network libvirt-daemon-config-nwfilter python-virtinst -y 
  • Start libvirtd service —
     # systemctl start libvirtd.service && systemctl status libvirtd.service  
  • Create a nested guest
     # ./create-nested-f18-guest.bash 
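
A fragment along these lines (added via ‘virsh edit regular-guest-f18’) does the trick; it corresponds to the ‘-cpu core2duo,+vmx’ visible in the qemu-kvm command line further down, though the exact model line may differ on your hardware:

  <cpu match='exact'>
    <model fallback='allow'>core2duo</model>
    <feature policy='require' name='vmx'/>
  </cpu>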

The scripts and reference libvirt XMLs I used for this demonstration are posted on GitHub.

qemu-kvm invocation of bare-metal and guest hypervisors

qemu-kvm invocation of the regular guest (guest hypervisor), indicating vmx extensions:


# ps -ef | grep -i qemu-kvm | egrep -i 'regular-guest-f18|vmx'
qemu     15768     1 19 13:33 ?        01:01:52 /usr/bin/qemu-kvm -name regular-guest-f18 -S -M pc-1.3 -cpu core2duo,+vmx -enable-kvm -m 4096 -smp 4,sockets=4,cores=1,threads=1 -uuid 9a7fd95b-7b4c-743b-90de-fa186bb5c85f -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/regular-guest-f18.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/export/vmimgs/regular-guest-f18.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a6:ff:96,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Running virt-host-validate (it’s part of the libvirt-client package) on the bare metal host, indicating the host is configured to run KVM:


# virt-host-validate 
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking for device /dev/kvm                                         : PASS
  QEMU: Checking for device /dev/vhost-net                                   : PASS
  QEMU: Checking for device /dev/net/tun                                     : PASS
   LXC: Checking for Linux >= 2.6.26                                         : PASS
# 

Networking Info
– The regular guest is using the bare metal host’s bridge device ‘br0’
– The nested guest is using libvirt’s default bridge ‘virbr0’

Caveat: If NAT’d networking is used on both the bare metal host and the guest hypervisor, both default to the 192.168.122.0/24 subnet (unless explicitly changed), which will mangle the networking setup. Bridging on L0 (bare metal host) and NAT on L1 (guest hypervisor) avoids this.
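
If you do end up with NAT on both layers, one escape hatch is to change the default network’s subnet inside the L1 guest hypervisor (192.168.124.0/24 below is only an example value):

# virsh net-edit default    # inside the guest hypervisor; change 192.168.122.x to e.g. 192.168.124.x in the <ip> element
# virsh net-destroy default && virsh net-start default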

Notes

  • Ensure the serial console is enabled in both the L1 and L2 guests; it’s very handy for debugging. If you use the kickstart file mentioned here, it’s taken care of. The magic parameters to add to the kernel command line are console=tty0 console=ttyS0,115200
  • Once the nested guest was created, I tried to set the hostname, and it turned out that for some reason ext4 had made the file system read-only:
    
    	#  hostnamectl set-hostname nested-guest-f18.foo.bar.com
    Failed to issue method call: Read-only file system
    

    Then I saw these I/O errors in /var/log/messages:

    
    .
    .
    .
    Feb 12 04:22:31 localhost kernel: [  724.080207] end_request: I/O error, dev vda, sector 9553368
    Feb 12 04:22:31 localhost kernel: [  724.080922] Buffer I/O error on device dm-1, logical block 33467
    Feb 12 04:22:31 localhost kernel: [  724.080922] Buffer I/O error on device dm-1, logical block 33468
    

    At this point, I tried to reboot the guest, only to be dropped into a dracut repair shell. I ran fsck a couple of times and then tried to reboot the nested guest, to no avail. Then I forcefully powered off the nested guest:

    # virsh destroy nested-guest-f18

    Now it boots just fine, though I was still trying to get to the bottom of the I/O errors. I discussed this behaviour with Rich Jones, and he suggested trying some more I/O activity inside the nested guest to see if I could trigger those errors again:

    
    # find / -exec md5sum {} \; > /dev/null
    # find / -xdev -exec md5sum {} \; > /dev/null
    

    After the above commands ran for more than 15 minutes, the I/O errors could not be triggered any more.

  • A libguestfs test (from rwmj) would be to run the command below on both the host and the first-level guest and compare. The command needs to be run several times, discarding the first few results, to get a hot cache.
    
    # time guestfish -a /dev/null run
    
  • Another libguestfs test Rich suggested is to disable nested virt and measure guestfish running in the guest, to find the speed-up from nested virtualization compared to pure software emulation (a rough sketch follows).
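
A rough sketch of how that comparison could be run (nested support is toggled on the bare metal host; the vmx feature would also need to be dropped from the L1 guest’s XML for the nested=0 case, since it uses policy='require'):

# On L0, with no guests running: disable nested support, then boot the L1 guest
# rmmod kvm_intel && modprobe kvm_intel nested=0

# Inside L1, the libguestfs appliance now runs under software emulation; time it a few times
$ for i in 1 2 3 4 5; do time guestfish -a /dev/null run; done

# Re-enable nested support on L0 (nested=1), expose vmx to L1 again, and repeat the timing loop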

Next, to run more useful workloads in these nested VMX guests.
