
Notes for building KVM-based virtualization components from upstream git

I frequently need the latest KVM, QEMU, libvirt and libguestfs while testing with OpenStack RDO. I either build from the upstream git master branches or from Fedora Rawhide (mostly the latter suffices). Below is the exact sequence I follow to build each from git. These instructions are available in some form in the README files of the said packages; I'm just noting them here explicitly for convenience. My primary development/test environment is Fedora, but the steps should be similar on other distributions. (Maybe I should just script it all.)

Build KVM from git

I think it’s worth noting the distinction (from the traditional master branch) of these KVM git branches: remotes/origin/queue and remotes/origin/next. The queue and next branches are the same most of the time, with the distinction that KVM queue is the branch where patches are usually tested before they are moved to the KVM next branch. Commits from the next branch are then submitted (as a PULL request) to Linus during the next kernel merge window. (I recall this from an old conversation on IRC with Gleb Natapov (thank you), one of the previous KVM maintainers.) An example of checking out the next branch follows the clone commands below.

# Clone the repo
$ git clone \
  git://git.kernel.org/pub/scm/virt/kvm/kvm.git

# To test out of tree patches,
# it's cleaner to do in a new branch
$ git checkout -b test_branch
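
Similarly, to build the queue or next branch discussed above, check it out locally (a sketch — verify the exact remote branch names first with git branch -r):

# Track the 'next' branch locally
$ git checkout -b kvm-next origin/next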

# Make a config file
$ make defconfig
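
Note: defconfig produces a minimal config. An alternative sketch, if you'd rather start from your distribution kernel's known-good config:

# Reuse the running kernel's config as a starting point
$ cp /boot/config-$(uname -r) .config
$ make olddefconfig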

# Compile
$ make -j4 && make bzImage && make modules

# Install
$ sudo -i
$ make modules_install && make install
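
On Fedora, make install normally takes care of the initramfs and bootloader entries; if your setup doesn't, a sketch to do it by hand (assuming a BIOS grub2 layout):

# Regenerate the grub config, then boot the new kernel
$ grub2-mkconfig -o /boot/grub2/grub.cfg
$ reboot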

Build QEMU from git

To build QEMU (only x86_64 target) from its git:

# Install build dependencies of QEMU
$ yum-builddep qemu

# Clone the repo (into ~/src/qemu, to match
# the configure invocation below)
$ git clone git://git.qemu.org/qemu.git ~/src/qemu

# Create a build directory to isolate source directory 
# from build directory
$ mkdir -p ~/build/qemu && cd ~/build/qemu

# Run the configure script
$ ~/src/qemu/configure --target-list=x86_64-softmmu \
  --disable-werror --enable-debug

# Compile
$ make -j4
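
A quick sanity check of the freshly built binary:

$ ./x86_64-softmmu/qemu-system-x86_64 --version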

I previously discussed QEMU building here.

Build libvirt from git

To build libvirt from its upstream git:

# Install build dependencies of libvirt
$ yum-builddep libvirt

# Clone the libvirt repo (into ~/src/libvirt)
$ git clone git://libvirt.org/libvirt.git ~/src/libvirt

# Create a build directory to isolate source directory
# from build directory
$ mkdir -p ~/build/libvirt && cd ~/build/libvirt

# Run the autogen script (from the source directory)
$ ~/src/libvirt/autogen.sh

# Compile
$ make -j4

# Run tests
$ make check

# Invoke libvirt programs without having to install them
$ ./run tools/virsh [. . .]
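
For example, to list all defined guests using the just-built binaries:

$ ./run tools/virsh list --all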

[Or, prepare RPMs and install them]

# Make RPMs (assumes Fedora `rpmbuild` setup
# is properly configured)
$ make rpm

# Install/update
$ yum update *.rpm

Build libguestfs from git

To build libguestfs from its upstream git:

# Install build dependencies of libguestfs
$ yum-builddep libguestfs

# Clone the libguestfs repo
$ git clone git://github.com/libguestfs/libguestfs.git \
   && cd libguestfs

# Run the autogen script
$ ./autogen.sh

# Compile
$ make -j4

# Run tests
$ make check

# Invoke libguestfs programs without having to install them
$ ./run guestfish [. . .]
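
For example, to inspect a disk image read-only with the freshly built tools (the image path here is illustrative):

$ ./run guestfish --ro -a /var/tmp/fedora18.qcow2 -i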

If you’d prefer libguestfs to use the custom QEMU built from git (as noted above), QEMU wrappers are useful in this case.
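
A minimal wrapper sketch, assuming the QEMU build directory from above — libguestfs picks up the wrapper via its LIBGUESTFS_QEMU environment variable:

# A trivial wrapper script around the custom-built QEMU
$ cat ~/bin/qemu-wrapper.sh
#!/bin/sh
exec ~/build/qemu/x86_64-softmmu/qemu-system-x86_64 "$@"

# Make it executable and point libguestfs at it
$ chmod +x ~/bin/qemu-wrapper.sh
$ export LIBGUESTFS_QEMU=~/bin/qemu-wrapper.sh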

Alternatively, if you’d prefer to build the above components locally from Fedora master, here are some instructions.


Multiple ways to access QEMU Machine Protocol (QMP)

Once QEMU is built, to get a finer understanding of it, or even for plain old debugging, familiarity with QMP (QEMU Machine Protocol) is quite useful. QMP allows applications — like libvirt — to communicate with a running QEMU instance. There are a few different ways to access the QEMU monitor to query the guest, get device (e.g. PCI, block) information, or modify the guest state (useful to understand block layer operations) using QMP commands. This post discusses a few of them.

Access QMP via libvirt’s qemu-monitor-command

Libvirt has had this capability for a long time, and it is the simplest of the ways. It can be invoked with virsh on a running guest — in this case, one called ‘devstack’:

$ virsh qemu-monitor-command devstack \
--pretty '{"execute":"query-kvm"}'
{
    "return": {
        "enabled": true,
        "present": true
    },
    "id": "libvirt-8"
}

In the above example, I ran the simple command query-kvm, which checks whether (1) the host is capable of running KVM, and (2) KVM is enabled. Refer below for a list of possible query commands.
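
Any other QMP command can be passed the same way; for instance, query-status reports the guest’s run state:

$ virsh qemu-monitor-command devstack \
--pretty '{"execute":"query-status"}'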

QMP via telnet

To access the monitor in any other way, we need a QEMU instance running in control mode. First, via telnet:

$ ./x86_64-softmmu/qemu-system-x86_64 \
  --enable-kvm -smp 2 -m 1024 \
  /export/images/el6box1.qcow2 \
  -qmp tcp:localhost:4444,server --monitor stdio
QEMU waiting for connection on: tcp:127.0.0.1:4444,server
VNC server running on `127.0.0.1:5900'
QEMU 1.4.50 monitor - type 'help' for more information
(qemu)

And, from a different shell, connect to that listening port 4444 via telnet:

$ telnet localhost 4444

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 1}, "package": ""}, "capabilities": []}}

We first have to enable QMP capabilities; this needs to be run before invoking any other command:

{ "execute": "qmp_capabilities" }

QMP via unix socket

First, invoke the QEMU binary in control mode using qmp, and create a unix socket as below:

$ ./x86_64-softmmu/qemu-system-x86_64 \
  --enable-kvm -smp 2 -m 1024 \
  /export/images/el6box1.qcow2 \
  -qmp unix:./qmp-sock,server --monitor stdio
QEMU waiting for connection on: unix:./qmp-sock,server

A few different ways to connect to the above QEMU instance running in control mode, via QMP:

  1. Firstly, via nc:

    $ nc -U ./qmp-sock
    {"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 1}, "package": ""}, "capabilities": []}}
    
  2. But, with the above, you have to manually enable the QMP capabilities and type each command in JSON syntax. It’s a bit cumbersome, and no history of the commands typed is saved.

  3. Next, a simpler way — a python script called qmp-shell, located in the QEMU source tree under qemu/scripts/qmp/qmp-shell, hides some of these details, like manually running the qmp_capabilities command.

    Connect to the unix socket using the qmp-shell script:

    $ ./qmp-shell ../qmp-sock 
    Welcome to the QMP low-level shell!
    Connected to QEMU 1.4.50
    
    (QEMU) 
    

    Then, just hit the TAB key, and all the possible commands will be listed. To see a list of query commands:

    (QEMU) query-<TAB>
    query-balloon               query-commands              query-kvm                   query-migrate-capabilities  query-uuid
    query-block                 query-cpu-definitions       query-machines              query-name                  query-version
    query-block-jobs            query-cpus                  query-mice                  query-pci                   query-vnc
    query-blockstats            query-events                query-migrate               query-status                
    query-chardev               query-fdsets                query-migrate-cache-size    query-target                
    (QEMU) 
    
  4. Finally, we can also access the unix socket using socat and rlwrap. Thanks to upstream QEMU developer Markus Armbruster for this hint.

    Invoke it this way, and execute a couple of commands — qmp_capabilities and query-kvm — to view the responses from the server:

    $ rlwrap -H ~/.qmp_history \
      socat UNIX-CONNECT:./qmp-sock STDIO
    {"QMP": {"version": {"qemu": {"micro": 50, "minor": 4, "major": 1}, "package": ""}, "capabilities": []}}
    {"execute":"qmp_capabilities"}
    {"return": {}}
    { "execute": "query-kvm" }
    {"return": {"enabled": true, "present": true}}
    

    Here, ~/.qmp_history contains recently run QMP commands in JSON syntax, and rlwrap adds decent editing capabilities, reverse search and history. So, once you have run all your commands, ~/.qmp_history holds a neat stack of QMP commands in JSON syntax.

    For instance, this is what my ~/.qmp_history file contains as I write this:

    $ cat ~/.qmp_history
    { "execute": "qmp_capabilities" }
    { "execute": "query-version" }
    { "execute": "query-events" }
    { "execute": "query-chardev" }
    { "execute": "query-block" }
    { "execute": "query-blockstats" }
    { "execute": "query-cpus" }
    { "execute": "query-pci" }
    { "execute": "query-kvm" }
    { "execute": "query-mice" }
    { "execute": "query-vnc" }
    { "execute": "query-spice " }
    { "execute": "query-uuid" }
    { "execute": "query-migrate" }
    { "execute": "query-migrate-capabilities" }
    { "execute": "query-balloon" }
    

To illustrate, I ran a few query commands (noted above) which provide informative responses from the server — no change is made to the state of the guest — so they can be executed safely.

I personally prefer the libvirt way, and accessing via the unix socket with socat and rlwrap.

NOTE: To try each of the above variants, first quit — type quit in the (qemu) shell — the QEMU instance running in control mode, reinvoke it, and then access it via one of the three ways above.


OpenStack — nova image-create, under the hood

NOTE (05-OCT-2016): This post is outdated — the current code (the snapshot() and _live_snapshot() methods in libvirt/driver.py) for cold snapshot (where the guest is paused) and live snapshot has changed quite a bit. In short, they now use the libvirt APIs managedSave() for cold snapshot, and blockRebase() for live snapshot.


I was trying to understand what kind of image nova image-create creates. It’s not entirely obvious from its help output, which says — Creates a new image by taking a snapshot of a running server. But what kind of snapshot? Let’s figure it out.

nova image-create operations

The command is invoked as below

 $  nova image-create fed18 "snap1-of-fed18" --poll 

Drilling into nova’s source code — nova/virt/libvirt/driver.py — this is what image-create does:

  1. If the guest — the one the snapshot is to be taken of — is running, nova calls libvirt’s virsh managedsave, which saves and stops the running guest, to be restarted later from the saved state.
  2. Next, it creates a qcow2 internal disk snapshot of the (now offline) guest.
  3. Then, it extracts the internal named snapshot from the qcow2 file, exports it to RAW format, and temporarily places it in $instances_path/snapshots.
  4. Deletes the internal named snapshot from the qcow2 file.
  5. Finally, uploads that image into OpenStack glance service — which can be confirmed by running glance image-list.

Update: Steps 2 and 4 above are now effectively removed with this upstream change.

A simple test
To get a bit more clarity, let’s try Nova’s actions on a single qcow2 disk — with a running Fedora 18 OS — using libvirt’s shell virsh and QEMU’s qemu-img:

 
# Save the state and stop a running guest
$ virsh managedsave fed18 

# Create a qemu internal snapshot
$ qemu-img snapshot -c snap1 fed18.qcow2 
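
# (Optional) List internal snapshots to verify
# 'snap1' was created
$ qemu-img snapshot -l fed18.qcow2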

# Get information about the disk
$ qemu-img info fed18.qcow2 

# Extract the internal snapshot,
# convert it to raw and export it to a file
$ qemu-img convert -f qcow2 -O raw -s \
    snap1 fed18.qcow2 snap1-fed18.img 

# Get information about the new image
# extracted from the snapshot
$ qemu-img info snap1-fed18.img 

# List out file sizes of the original
# and the extracted image
$ ls -lash fed18.qcow2 snap1-fed18.img

# Delete the internal snapshot 
# from the original disk
$ qemu-img snapshot -d snap1 fed18.qcow2 

# Again, get information of the original disk
$ qemu-img info fed18.qcow2 

# Start the guest again
$ virsh start fed18 

Thanks to Nikola Dipanov for helping me on where to look.

Update: A few things I missed mentioning (thanks again for the comments from Nikola) — I was using libvirt and KVM as the underlying hypervisor technologies, with the OpenStack Folsom release.


Building qemu from upstream git source

From time to time, I test with upstream qemu to try out some specific area that I’m interested in (lately, block layer operations). Here’s how I build qemu from the upstream git source, for a specific target — x86_64.

Install development tools (I’m using a Fedora 18 machine here), and a couple of other development libraries:

$  yum groupinstall "Development Tools" 
$  yum install glib2-devel zlib-devel -y

Clone the qemu upstream source tree

$  git clone git://git.qemu.org/qemu.git  
$  cd qemu 

Run the qemu configure script for x86_64 architecture, and run make

$ ./configure --target-list=x86_64-softmmu \
--disable-werror --enable-debug 
$  make -j5 

Note: When the qemu configure script is invoked, it may throw an error asking you to either install the pixman development package or fetch its git submodule.

I fetched the pixman submodule:

 $  git submodule update --init pixman 

If you have an OS disk image handy, you can invoke the newly built qemu binary to boot the guest on stdio, assuming a serial console is enabled — i.e., console=tty0 console=ttyS0,115200 is on your guest’s kernel command line in /etc/grub2.conf:

$ ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm \
-smp 2 -m 1024 /export/images/fedora18.qcow2 \
-nographic

Note: With the above invocation, the --enable-kvm parameter makes use of the existing kvm kernel module on the host.
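
To confirm that the kvm kernel modules are loaded on the host:

$ lsmod | grep kvm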
