Monthly Archives: March 2013

Multiple ways to access QEMU Machine Protocol (QMP)

Once QEMU is built, to get a finer understanding of it, or even for plain old debugging, familiarity with QMP (QEMU Machine Protocol) is quite useful. QMP allows applications — like libvirt — to communicate with a running QEMU instance. There are a few different ways to access the QEMU monitor to query the guest, get device (e.g. PCI, block, etc.) information, or modify the guest state (useful to understand block layer operations) using QMP commands. This post discusses a few aspects of it.

Access QMP via libvirt’s qemu-monitor-command
Libvirt has had this capability for a long time, and it is the simplest way. It can be invoked via virsh, on a running guest — in this case called ‘devstack’:

$ virsh qemu-monitor-command devstack \
--pretty '{"execute":"query-kvm"}'
{
    "return": {
        "enabled": true,
        "present": true
    },
    "id": "libvirt-8"
}

In the above example, I ran the simple command query-kvm, which checks (1) whether the host is capable of running KVM, and (2) whether KVM is enabled. Refer below for a list of possible ‘query’ commands.
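
The same command can also be sent from Python instead of virsh. Below is a minimal sketch (my addition, not part of the original post) using the libvirt Python bindings; it assumes the libvirt-python package and its libvirt_qemu module are installed, and that the guest is still named ‘devstack’:

# Sketch only: the programmatic equivalent of `virsh qemu-monitor-command`.
# The guest name and connection URI are assumptions; adjust them to your setup.
import json
import libvirt        # from the libvirt-python package
import libvirt_qemu   # QEMU-specific helpers, e.g. qemuMonitorCommand()

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("devstack")

# flags=0 means the command is treated as QMP (the default mode)
reply = libvirt_qemu.qemuMonitorCommand(
    dom, json.dumps({"execute": "query-kvm"}), 0)

print(json.dumps(json.loads(reply), indent=4))
conn.close()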

QMP via telnet
To access the monitor in any other way, we need a qemu instance running in control mode. Here it exposes QMP over TCP, so that we can connect with telnet:

$ ./x86_64-softmmu/qemu-system-x86_64 \
  --enable-kvm -smp 2 -m 1024 \
  /export/images/el6box1.qcow2 \
  -qmp tcp:localhost:4444,server --monitor stdio
QEMU waiting for connection on: tcp:127.0.0.1:4444,server
VNC server running on `127.0.0.1:5900'
QEMU 1.4.50 monitor - type 'help' for more information
(qemu)

And, from a different shell, connect to that listening port 4444 via telnet:

$ telnet localhost 4444

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 1}, "package": ""}, "capabilities": []}}

We first have to enable QMP capabilities. This needs to be run before invoking any other command:

{ "execute": "qmp_capabilities" }
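
If you want to script this handshake instead of typing it into telnet, a few lines of Python over a plain TCP socket are enough. This is only a sketch of mine (not from the post), assuming QEMU is listening on localhost:4444 as started above:

# QMP over TCP: read the greeting, enable capabilities, then query.
# Assumes QEMU was started with `-qmp tcp:localhost:4444,server`.
import json
import socket

sock = socket.create_connection(("localhost", 4444))
conn = sock.makefile("rw")

print(conn.readline().strip())      # the {"QMP": {...}} greeting

def execute(command):
    conn.write(json.dumps({"execute": command}) + "\n")
    conn.flush()
    return json.loads(conn.readline())

execute("qmp_capabilities")         # must be the first command
print(execute("query-kvm"))         # e.g. {'return': {'enabled': True, 'present': True}}

sock.close()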

QMP via unix socket
First, invoke the qemu binary in control mode using qmp, and create a unix socket as below:

$ ./x86_64-softmmu/qemu-system-x86_64 \
  --enable-kvm -smp 2 -m 1024 \
  /export/images/el6box1.qcow2 \
  -qmp unix:./qmp-sock,server --monitor stdio
QEMU waiting for connection on: unix:./qmp-sock,server

There are a few different ways to connect to the above qemu instance running in control mode, via QMP:

  1. First, via nc:

    $ nc -U ./qmp-sock
    {"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 1}, "package": ""}, "capabilities": []}}
    
  2. But with the above, you have to manually enable the QMP capabilities and type each command in JSON syntax. It’s a bit cumbersome, and no history of the commands you type is saved.

  3. Next, a simpler way — a Python script called qmp-shell is located in the QEMU source tree, under qemu/scripts/qmp/qmp-shell, which hides some details — like manually running qmp_capabilities.

    Connect to the unix socket using the qmp-shell script:

    $ ./qmp-shell ../qmp-sock 
    Welcome to the QMP low-level shell!
    Connected to QEMU 1.4.50
    
    (QEMU) 
    

    Then, just hit the TAB key, and all the possible commands will be listed. To see a list of query commands:

    (QEMU) query-<TAB>
    query-balloon               query-commands              query-kvm                   query-migrate-capabilities  query-uuid
    query-block                 query-cpu-definitions       query-machines              query-name                  query-version
    query-block-jobs            query-cpus                  query-mice                  query-pci                   query-vnc
    query-blockstats            query-events                query-migrate               query-status                
    query-chardev               query-fdsets                query-migrate-cache-size    query-target                
    (QEMU) 
    
  4. Finally, we can also access the unix socket using socat and rlwrap. Thanks to upstream qemu developer Markus Armbruster for this hint.

    Invoke it this way, and execute a couple of commands (qmp_capabilities and query-kvm) to view the response from the server.

    $ rlwrap -H ~/.qmp_history \
      socat UNIX-CONNECT:./qmp-sock STDIO
    {"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 1}, "package": ""}, "capabilities": []}}
    {"execute":"qmp_capabilities"}
    {"return": {}}
    { "execute": "query-kvm" }
    {"return": {"enabled": true, "present": true}}
    

    Here, ~/.qmp_history contains recently run QMP commands in JSON syntax, and rlwrap adds decent editing capabilities, reverse search and history. So, once you run all your commands, ~/.qmp_history has a neat stack of all the QMP commands in JSON syntax.

    For instance, this is what my ~/.qmp_history file contains as I write this:

    $ cat ~/.qmp_history
    { "execute": "qmp_capabilities" }
    { "execute": "query-version" }
    { "execute": "query-events" }
    { "execute": "query-chardev" }
    { "execute": "query-block" }
    { "execute": "query-blockstats" }
    { "execute": "query-cpus" }
    { "execute": "query-pci" }
    { "execute": "query-kvm" }
    { "execute": "query-mice" }
    { "execute": "query-vnc" }
    { "execute": "query-spice " }
    { "execute": "query-uuid" }
    { "execute": "query-migrate" }
    { "execute": "query-migrate-capabilities" }
    { "execute": "query-balloon" }
    

To illustrate, I ran a few query commands (noted above), which provide informative responses from the server — no change is made to the state of the guest — so they can be executed safely.
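
Yet another option is to drive the socket programmatically with the qmp.py module that lives next to qmp-shell in the QEMU source tree. The sketch below is my own, not from the post; it assumes that module exposes a QEMUMonitorProtocol class with connect() and cmd() methods, so double-check against the copy in your tree, since these scripts carry no API stability promise:

# Sketch: batch a few read-only QMP queries over the unix socket, using the
# helper module shipped in the QEMU tree (qemu/scripts/qmp/qmp.py). The
# class/method names here are assumptions; verify them in your checkout.
import sys
sys.path.append("scripts/qmp")            # run this from the top of the qemu tree

from qmp import QEMUMonitorProtocol

srv = QEMUMonitorProtocol("./qmp-sock")   # the path given to -qmp unix:...,server
srv.connect()                             # reads the greeting, negotiates capabilities
for query in ("query-version", "query-kvm", "query-status"):
    print(query, "->", srv.cmd(query))
srv.close()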

I personally prefer the libvirt way, and accessing the unix socket with socat and rlwrap.

NOTE: To try each of the above variants, first quit the qemu instance running in control mode (type quit at the (qemu) prompt), reinvoke it, then access it via one of the three different ways.



Debugging nova — a small illustration with pdb

The other day, I was testing non-ASCII characters with nova commands. I started by adding an ssh key to nova with a non-ASCII name, not realizing key-pair names don’t support them. This is how we figured it out after a small debugging session with Flavio.

First, let’s try to add an ssh keypair and provide a non-ASCII name:

$ nova keypair-add --pub_key ~/.ssh/id_rsa.pub äçë
ERROR: Keypair data is invalid (HTTP 400) (Request-ID: req-5a1a2683-d658-417b-a805-91346d5202dc)

Note: supplying --debug provides more info on stdout. However, we’re interested in finding out the root cause.

Let’s find out why the key-pair data is invalid by stepping through the code using pdb. To start, stop the nova-api service, and ensure none of its processes are running:

$ service openstack-nova-api stop
$ ps aux | grep -i nova-api

Ensure the config attribute osapi_compute_workers is set to 1 in /etc/nova/nova.conf — I had it set to 5 while testing rhbz-81349 — so the API workers won’t run in parallel:

$ grep osapi_compute_workers /etc/nova/nova.conf \
  | grep -v ^$ | grep -v ^#
osapi_compute_workers=1

Let’s set the pdb trace — import pdb; pdb.set_trace() — in the source code to break into the debugger, in /usr/lib/python2.6/site-packages/nova/compute/api.py.

Code context — I’m setting the breakpoint at the start of the self-explanatory function _validate_keypair_name:

.
.
class KeypairAPI(base.Base):
    """Sub-set of the Compute Manager API for managing key pairs."""
    def __init__(self, **kwargs):
        super(KeypairAPI, self).__init__(**kwargs)

    def _validate_keypair_name(self, context, user_id, key_name):

        import pdb; pdb.set_trace()       

        safechars = "_- " + string.digits + string.ascii_letters
        clean_value = "".join(x for x in key_name if x in safechars)
.
.

Now, invoke the nova-api service manually:

$ /usr/bin/nova-api --config-file \
  /usr/share/nova/nova-dist.conf --config-file \
  /etc/nova/nova.conf --logfile /var/log/nova/api.log

From a different terminal, again, try to create a keypair with non-ASCII characters — this time, it’ll let us break into the debugger.

$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub äçë

Now, in the shell from which we invoked the nova-api server daemon, the Pdb debugger will be waiting. To step through the code, do:

  • type n — which continues execution until the next line in the current function is reached
  • print the value of safechars — it shows which characters are considered safe, which is what we’re trying to find out
  • quit the debugger
.
.
.
2013-03-26 19:56:26 71243 DEBUG nova.utils [-] Reloading cached file /etc/nova/policy.json read_cached_file /usr/lib/python2.6/site-packages/nova/uti1
2013-03-26 19:56:27 INFO nova.api.openstack.wsgi [req-9ef1fd31-0cec-4465-894a-f1ae0fa9a77e 320ce46de7e24a75a7ff8906d7355ff7 57ff99aae24b4035b52177a72s
> /usr/lib/python2.6/site-packages/nova/compute/api.py(2198)_validate_keypair_name()
-> safechars = "_- " + string.digits + string.ascii_letters
(Pdb) 
(Pdb)  n
> /usr/lib/python2.6/site-packages/nova/compute/api.py(2199)_validate_keypair_name()
-> clean_value = "".join(x for x in key_name if x in safechars)
(Pdb)  safechars
 '_- 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' 
(Pdb) 
(Pdb)  quit

So, the above indicates non-ASCII characters are not allowed in key-pair names.
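
To make it concrete, the same two lines can be replayed outside of nova. The snippet below is my own illustration (not part of the debugging session) of why a non-ASCII name ends up empty, and therefore invalid:

# Replaying the validation logic seen in the debugger: any character of the
# key name that is not in `safechars` is silently dropped.
import string

safechars = "_- " + string.digits + string.ascii_letters
key_name = u"äçë"
clean_value = "".join(x for x in key_name if x in safechars)

print(repr(clean_value))   # prints an empty string: nothing in the name survived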

Clean up

Remove the debugging fragment from the source file, then start the nova-api server daemon gracefully:

 $ service openstack-nova-api start 


OpenStack — nova image-create, under the hood

NOTE (05-OCT-2016): This post is outdated — the current code for cold snapshot (where the guest is paused) and live snapshot, in the snapshot() and _live_snapshot() methods in libvirt/driver.py, has changed quite a bit. In short, it now uses the libvirt APIs managedSave() for cold snapshot, and blockRebase() for live snapshot.


I was trying to understand what kind of image nova image-create creates. It’s not entirely obvious from its help output, which says — Creates a new image by taking a snapshot of a running server. But what kind of snapshot? Let’s find out.

nova image-create operations

The command is invoked as below

 $  nova image-create fed18 "snap1-of-fed18" --poll 

Drilling into nova’s source code — nova/virt/libvirt/driver.py — this is what image-create does:

  1. If the guest — the one the snapshot is to be taken from — is running, nova calls libvirt’s virsh managedsave, which saves the state of the running guest and stops it, to be restarted later from the saved state.
  2. Next, it creates a qcow2 internal disk snapshot of the guest (now offline).
  3. Then, it extracts the internal named snapshot from the qcow2 file, converts it to RAW format, and temporarily places it in $instances_path/snapshots.
  4. It deletes the internal named snapshot from the qcow2 file.
  5. Finally, it uploads that image into the OpenStack glance service — which can be confirmed by running glance image-list.

Update: Steps 2 and 4 above are now effectively removed with this upstream change.

A simple test
To get a bit more clarity, let’s try Nova’s actions on a single qcow2 disk — with a running Fedora 18 OS — using libvirt’s shell virsh and QEMU’s qemu-img:

 
# Save the state and stop a running guest
$ virsh managedsave fed18 

# Create a qemu internal snapshot
$ qemu-img snapshot -c snap1 fed18.qcow2 

# Get information about the disk
$ qemu-img info fed18.qcow2 

# Extract the internal snapshot, 
# convert it to raw and export it to a file
$ qemu-img convert -f qcow2 -O raw -s \
    snap1 fed18.qcow2 snap1-fed18.img 

# Get information about the new image
# extracted from the snapshot
$ qemu-img info snap1-fed18.img 

# List out file sizes of the original 
# and the snapshot
$ ls -lash fed18.qcow2 snap1-fed18.img 

# Delete the internal snapshot 
# from the original disk
$ qemu-img snapshot -d snap1 fed18.qcow2 

# Again, get information of the original disk
$ qemu-img info fed18.qcow2 

# Start the guest again
$ virsh start fed18 

Thanks to Nikola Dipanov for helping me on where to look.

Update: A few things I missed mentioning (thanks again for comments from Nikola) — I was using libvirt and KVM as the underlying hypervisor technologies, with the OpenStack Folsom release.


Building qemu from upstream git source

From time to time, I test upstream qemu to try out some specific area that I’m interested in (lately, block layer operations). Here’s how I go about building qemu from the upstream git source, for a specific target — x86_64.

Install development tools (I’m using a Fedora 18 machine here), and a couple of other development libraries:

$  yum groupinstall "Development Tools" 
$  yum install glib2-devel zlib-devel -y 

Clone the qemu upstream source tree

$  git clone git://git.qemu.org/qemu.git  
$  cd qemu 

Run the qemu configure script for x86_64 architecture, and run make

$ ./configure --target-list=x86_64-softmmu \
--disable-werror --enable-debug 
$  make -j5 

Note: When the qemu configure script is invoked, it’ll throw an error asking you to either install the pixman package or fetch its git submodule.

I fetched the pixman submodule:

 $  git submodule update --init pixman 

If you have an OS disk image handy, you can invoke the newly built qemu binary to boot a guest on stdio, assuming a serial console is enabled — i.e. console=tty0 console=ttyS0,115200 is on your guest’s kernel command line in /etc/grub2.conf:

$ ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm \
-smp 2 -m 1024 /export/images/fedora18.qcow2 \
-nographic

Note: With the above invocation, the --enable-kvm parameter makes use of the existing kvm kernel module on the system.
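
As a quick sanity check (my addition, not from the post), you can confirm from Python that the kvm module is actually usable before relying on --enable-kvm:

# The kvm plus kvm_intel (or kvm_amd) kernel modules expose the /dev/kvm
# character device; if it's absent or inaccessible, --enable-kvm won't help.
import os

if os.access("/dev/kvm", os.R_OK | os.W_OK):
    print("/dev/kvm is usable; --enable-kvm should work")
else:
    print("no usable /dev/kvm; load the kvm modules or drop --enable-kvm")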


OpenStack nova — dealing with unused base images

Nova uses disk image cache management to handle unused base disk images, validate their integrity, etc. In this entry, let’s try to find out how unused base images are cleaned up. For context on OpenStack’s libvirt base images, take a look at Pádraig’s excellent write-up on the life of an OpenStack libvirt image, and a prior related post on disk image usage by nova.

Set the relevant config directives
Let’s set periodic_interval=1 (the number of seconds between runs of periodic tasks) and image_cache_manager_interval=1 (the number of periodic scheduler ticks to wait between runs of the image cache manager). I’ve set both of them to ‘1’ for testing purposes, and grepped to ensure the change is reflected:

$ sudo echo -e "periodic_interval=1\nimage_cache_manager_interval=1" >> /etc/nova/nova.conf
$ sudo egrep '^image_cache_manager_interval|^periodic_interval' /etc/nova/nova.conf 
periodic_interval=1
image_cache_manager_interval=1
$

Restart all openstack nova services, so the config changes take effect:


$ for j in `for i in $(ls -1 /etc/init.d/openstack-nova-*) ;\
  do $i status | grep running ;\
  done | awk '{print $1}'` ; do service $j restart ; done

Delete a running nova instance, verify logs to see the effect of config changes

Let’s stop and delete the only nova instance running — so that nova’s image cache manager knows the image is no longer in use, and the previously enabled config options can take effect.

$ nova list
+--------------------------------------+---------+--------+-------------------+
| ID                                   | Name    | Status | Networks          |
+--------------------------------------+---------+--------+-------------------+
| 11c1997a-b1ff-46ef-b96c-f329576de8d6 | fedora4 | ACTIVE | net1=10.xx.yyy.zz |
+--------------------------------------+---------+--------+-------------------+
$ nova stop 11c1997a-b1ff-46ef-b96c-f329576de8d6
$ nova delete 11c1997a-b1ff-46ef-b96c-f329576de8d6
$ nova list
$ sudo virsh list --all
 Id    Name                           State
----------------------------------------------------

List out the base_images (now they should become unused):

$ sudo ls -lash /var/lib/nova/instances/_base/
total 1.7G
4.0K drwxr-xr-x. 2 nova nova 4.0K Feb 18 00:41 .
4.0K drwxr-xr-x. 3 nova nova 4.0K Feb 18 04:01 ..
739M -rw-r--r--. 1 nova nova 9.8G Feb 18 04:01 06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3
741M -rw-r--r--. 1 nova nova  20G Feb 18 04:01 06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
241M -rw-r--r--. 1 nova nova 241M Feb 18 00:41 06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3.part
$ 

At this point, the — now unused — base_images should’ve been deleted. Let’s look at nova’s compute log, at /var/log/nova/compute.log:

.
.
2012-02-18 04:13:03 37747 INFO nova.compute.resource_tracker [-] Compute_service record updated for interceptor.foo.bar.com 
2012-02-18 04:13:03 37747 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3
2012-02-18 04:13:03 37747 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:13:03 37747 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3 /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20

Now, it says the base files are too young to remove:

.
.
2012-02-18 04:16:46 37747 INFO nova.virt.libvirt.imagecache [-] Base file too young to remove: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3
2012-02-18 04:16:46 37747 INFO nova.virt.libvirt.imagecache [-] Base file too young to remove: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20

Let’s drill down a bit further by searching the upstream nova code for how young ‘too young’ is:

$ git clone git://github.com/openstack/nova.git && cd nova
$ grep FLAGS.remove_unused_original_minimum_age_seconds nova/virt/libvirt/imagecache.py
            maxage = FLAGS.remove_unused_original_minimum_age_seconds
$ git grep remove_unused_original_minimum_age_seconds
etc/nova/nova.conf.sample:# remove_unused_original_minimum_age_seconds=86400
nova/virt/libvirt/imagecache.py:    cfg.IntOpt('remove_unused_original_minimum_age_seconds',
nova/virt/libvirt/imagecache.py:            maxage = FLAGS.remove_unused_original_minimum_age_seconds
$

OK, 86400 seconds (24 hours) appears to be the default. Let’s set that value to 60 seconds in nova.conf and restart all nova services again:

$ sudo echo -e "remove_unused_original_minimum_age_seconds=60" >> /etc/nova/nova.conf
$ sudo egrep '^image_cache_manager_interval|^periodic_interval|^remove_unused_original_minimum_age_seconds' /etc/nova/nova.conf 
periodic_interval=1
image_cache_manager_interval=1
remove_unused_original_minimum_age_seconds=60
$ 

Restart all nova services again:


$ sudo -i
$ for j in `for i in $(ls -1 /etc/init.d/openstack-nova-*) ;\
  do $i status | grep running ;\
  done | awk '{print $1}'` ;\
  do service $j restart ; done

Now, /var/log/nova/compute.log indicates one of the base files being removed, which is good:

.
.
2012-02-18 04:24:17 41389 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3 /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removing base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3

Let’s check the nova base images directory. Now, it seems two base_images are still remaining:

$ sudo ls -1 /var/lib/nova/instances/_base/
06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3.part
$ 

Let’s get back to nova’s compute.log; it still says ‘Unknown base file’ and ‘Base file too young to remove’:

.
.
2012-02-18 04:50:45 41389 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:50:45 41389 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:50:45 41389 INFO nova.virt.libvirt.imagecache [-] Base file too young to remove: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20

Let’s check the nova compute.log again — now it indicates the remaining unused base image being deleted; judging from the time-stamps, about 10 minutes later.

.
.
2012-02-18 05:01:24 41389 INFO nova.compute.resource_tracker [-] Compute_service record updated for interceptor.foo.bar.com 
2012-02-18 05:01:24 41389 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 05:01:24 41389 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 05:01:24 41389 INFO nova.virt.libvirt.imagecache [-] Removing base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20

Now, the only remaining one appears to be a .part image.

$ sudo ls -1 /var/lib/nova/instances/_base/
06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3.part
$

From further investigation by Pádraig, the stale .part files turned out to be a different bug (for which he posted a fix upstream) — when converting non-raw (in this case, qcow2) images to raw, libvirt leaves behind the original non-raw image on disk.

Finally
Let’s turn the config directives back to sane defaults: set remove_unused_original_minimum_age_seconds back to 86400, comment out periodic_interval and image_cache_manager_interval, and restart all nova services so the config changes take effect.

$ sudo sed -i 's/remove_unused_original_minimum_age_seconds=60/remove_unused_original_minimum_age_seconds=86400/' /etc/nova/nova.conf
$ sudo sed -i 's/periodic_interval=1/\#periodic_interval=1/;s/image_cache_manager_interval=1/\#image_cache_manager_interval=1/' /etc/nova/nova.conf
$ for j in `for i in $(ls -1 /etc/init.d/openstack-nova-*) ;\
   do $i status | grep running ;\
  done | awk '{print $1}'` ; do service $j restart ; done

UPDATE: I also had to set the config directive remove_unused_resized_minimum_age_seconds=60 in nova.conf, to remove the resized raw images (resized according to the flavor chosen, in this case m1.small) from the _base directory.
