Documentation of QEMU Block Device Operations

The QEMU Block Layer currently (as of QEMU 2.10) supports four major kinds of live block device jobs: stream, commit, mirror, and backup. These can be used to manipulate disk image chains to accomplish certain tasks, e.g.:

  • live copy data from backing files into overlays;
  • shorten long disk image chains by merging data from overlays into backing files;
  • live synchronize data from a disk image chain (including the current active disk) to another target image; and
  • take point-in-time (and incremental) backups of a block device.

To that end, I have recently written documentation (thanks to the QEMU Block Layer maintainers & developers for the reviews) on the usage of the following commands:

  • block-stream
  • block-commit
  • drive-mirror (and blockdev-mirror)
  • drive-backup (and blockdev-backup)

For each of the above block device jobs, QMP (QEMU Machine Protocol) invocation examples are documented.
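
To give a flavor, here is a minimal sketch of such a QMP invocation, for block-stream (the device name and base file path below are illustrative, not taken from the documentation itself):

{ "execute": "block-stream",
  "arguments": { "device": "virtio0",
                 "base": "/export/images/base.qcow2" } }

This copies data from the backing chain into the active image so that, when the job completes, the active image’s backing file is the specified base; QEMU then emits a BLOCK_JOB_COMPLETED event.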

Here’s the source. And here’s the Sphinx-rendered HTML version.

This documentation can be handy in those (debugging) scenarios when it’s instructive to look at what is happening behind the scenes of QEMU. For example, live storage migration (without shared storage setup) is one of the most common use-cases that takes advantage of the QMP drive-mirror command and QEMU’s built-in Network Block Device (NBD) server. Here’s the QMP-level workflow for it — this is the flow libvirt internally implements (with some additional niceties).
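
For illustration, a rough sketch of the core of that flow at the QMP level (the host name, port, device ID, and export name below are hypothetical; the destination QEMU is started with a blank target image and waits for an incoming migration):

# On the destination host: start the built-in NBD server and expose
# the blank target disk as an export:
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "inet",
                 "data": { "host": "dest-host", "port": "49153" } } } }
{ "execute": "nbd-server-add",
  "arguments": { "device": "drive-virtio-disk0", "writable": true } }

# On the source host: live-mirror the active disk onto that NBD export:
{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio-disk0",
                 "target": "nbd:dest-host:49153:exportname=drive-virtio-disk0",
                 "sync": "full", "format": "raw", "mode": "existing" } }

Once the mirror job reaches the ready state (signalled by the BLOCK_JOB_READY event), the actual migration is triggered, and the block job is then cancelled on the source to cleanly end the mirroring.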


QEMU Advent Calendar 2016

The QEMU Advent Calendar 2016 website features a QEMU disk image each day from 01-DEC to 24-DEC. Each day a new package becomes available for download (in tar.xz format), which contains a file describing the image (readme.txt or similar) and a little run shell script that starts QEMU with the recommended command-line parameters for the disk image.

“The disk images contain interesting operating systems and software that run under the QEMU emulator. Some of them are well-known or not-so-well-known operating systems, old and new, others are custom demos and neat algorithms.” [From the About section.]

This is brought to you by Thomas Huth (his initial announcement here) and yours truly.


Explore the last five days of images from the 2016 edition here! [Extract the download with, e.g. for Day 05: tar -xf day05.tar.xz]
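
A typical day’s package then unpacks and runs along these lines (the exact file and script names vary per image; “run” as the script name is an assumption):

$ tar -xf day05.tar.xz
$ cd day05
$ cat readme.txt   # description of the day's disk image
$ ./run            # starts QEMU with the recommended parameters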

PS: We still have a few open slots, so please don’t hesitate to contact us if you have any fun disk image(s) to contribute.


LinuxCon talk slides: “A Practical Look at QEMU’s Block Layer Primitives”

Last week I spent time at LinuxCon (and the co-located KVM Forum) in Toronto. I presented a talk on QEMU’s block layer primitives. Specifically, the QMP primitives block-commit, drive-mirror, drive-backup, and QEMU’s built-in NBD (Network Block Device) server.

Here are the slides.


QEMU command-line: behavior of ‘-serial stdio’ vs. ‘-serial mon:stdio’

[Thanks to Paolo Bonzini for writing the original patch, and to Alexander Graf for mentioning this on IRC some months ago; I had to go and look up the logs.]

I forget to remember: to avoid QEMU being terminated on SIGINT (Ctrl+c), supply the special parameter mon:stdio, instead of just stdio, to the -serial option (which redirects the virtual serial port to a host character device). The subtle difference between them:

  • -serial stdio: Redirects the virtual serial port onto stdio; upon Ctrl+c, QEMU immediately terminates.
  • -serial mon:stdio: In this mode, the virtual serial port and the QEMU monitor are multiplexed onto stdio. Ctrl+c is handled, i.e. QEMU won’t be terminated; instead, the signal is passed through to the guest.

In summary, a working example command-line:

$ qemu-system-x86_64               \
   -display none                   \
   -nodefconfig                    \
   -nodefaults                     \
   -m 2048                         \
   -device virtio-scsi-pci,id=scsi \
   -device virtio-serial-pci       \
   -serial mon:stdio               \
   -drive file=./fedora23.qcow2,format=qcow2,if=virtio

To toggle between the QEMU monitor prompt and the serial console, type Ctrl+a followed by ‘c’. To terminate QEMU while at the monitor prompt, (qemu), type ‘quit’.


CloudOpen (Dublin) 2015 talk slides: “Debugging the Virtualization layer (libvirt and QEMU) in OpenStack”

I just presented a talk on “Debugging the Virtualization layer (libvirt and QEMU) in OpenStack” at LinuxCon/CloudOpen Europe that is currently happening in Dublin, Ireland.

Slides are located here.


Minimal DevStack with OpenStack Neutron networking

This post discusses a way to set up a minimal DevStack (an OpenStack development environment from git sources) with Neutron networking, in a virtual machine.

(a) Setup minimal DevStack environment in a VM

Prepare VM, take a snapshot

Assuming you have a Linux virtual machine set up (Fedora 21 or above, or any Debian variant, with at least 8GB of memory and 40GB of disk space), take a quick snapshot. The below creates a QCOW2 internal snapshot (which means your disk image must be a QCOW2 image); you can invoke it live or offline:

 $ virsh snapshot-create-as devstack-vm cleanslate

So that if something goes wrong, you can revert to this clean state by simply doing:

 $ virsh snapshot-revert devstack-vm cleanslate

Setup DevStack

There are plenty of configuration variants with which to set up DevStack. The upstream documentation has its own recommended minimal configuration. The configuration below has a much smaller footprint, enabling only the Nova (Compute, Scheduler, API and Conductor services), Keystone, Neutron and Glance (Image service) services.

$ mkdir -p $HOME/sources/cloud
$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
$ chmod go+rx $HOME
$ cat << 'EOF' > local.conf
[[local|localrc]]
DEST=$HOME/sources/cloud
DATA_DIR=$DEST/data
SERVICE_DIR=$DEST/status
SCREEN_LOGDIR=$DATA_DIR/logs/
LOGFILE=$DATA_DIR/logs/devstacklog.txt
VERBOSE=True
USE_SCREEN=True
LOG_COLOR=True
RABBIT_PASSWORD=secret
MYSQL_PASSWORD=secret
SERVICE_TOKEN=secret
SERVICE_PASSWORD=secret
ADMIN_PASSWORD=secret
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256
FORCE_CONFIG_DRIVE=always
VIRT_DRIVER=libvirt
# To use nested KVM, un-comment the below line
# LIBVIRT_TYPE=kvm
IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img"
# If you have the `dnf` package manager, use it to speed up
# DevStack build/tear down
export YUM=dnf
EOF

NOTE: If you’re using KVM-based virtualization under the hood, refer to this upstream documentation on setting it up with DevStack, so that the VMs in your OpenStack cloud (i.e. Nova instances) can run relatively faster than with plain QEMU emulation. So, if you have the relevant hardware, you might want to set that up before proceeding further.
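
A quick sanity check for that (a sketch; a non-zero count means the VM exposes the hardware virtualization extensions, so nested KVM is usable):

 $ egrep -c '(vmx|svm)' /proc/cpuinfo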

Invoke the install script:

 $ ./stack.sh 

[27MAR2015 Update]: Don’t forget to systemctl enable the below services so they start on boot; otherwise you won’t be able to start all OpenStack services when you reboot your DevStack VM:

 $ systemctl enable openvswitch mariadb rabbitmq-server 

(b) Configure Neutron networking

Once DevStack installation completes successfully, let’s set up Neutron networking.

Set the Neutron router gateway and add security group rules

(1) Source the user tenant (‘demo’ user) credentials:

 $ . openrc demo

(2) Enumerate Neutron security group rules:

$ neutron security-group-list

(3) Create a few environment variables, for convenience, capturing the IDs of the Neutron public network, private network and router:

$ PUB_NET=$(neutron net-list | grep public | awk '{print $2;}')
$ PRIV_NET=$(neutron net-list | grep private | awk '{print $2;}')
$ ROUTER_ID=$(neutron router-list | grep router1 | awk '{print $2;}')

(4) Set the Neutron gateway for the router:

$ neutron router-gateway-set $ROUTER_ID $PUB_NET

(5) Add security group rules to enable ping and ssh:

$ neutron security-group-rule-create --protocol icmp \
    --direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp  \
    --port-range-min 22 --port-range-max 22 --direction ingress default

Boot a Nova instance

Source the ‘demo’ user’s Keystone credentials, add a Nova key pair, and boot an ‘m1.small’ flavored CirrOS instance:

$ . openrc demo
$ nova keypair-add oskey1 > oskey1.priv
$ chmod 600 oskey1.priv
$ nova boot --image cirros-0.3.3-x86_64-disk \
    --nic net-id=$PRIV_NET --flavor m1.small \
    --key_name oskey1 cirrvm1 --security_groups default

Create a floating IP and assign it to the Nova instance

The below sequence of commands enumerates the Nova instances, finds the Neutron port ID for a specific instance, creates a floating IP, and associates it with the Nova instance. Then it lists the Nova instances again, so you can see both the floating and fixed IPs for it:

$ nova list
$ neutron port-list --device-id $NOVA_INSTANCE_UUID
$ neutron floatingip-create public
$ neutron floatingip-associate $FLOATING_IP_UUID $PORT_ID_OF_NOVA_INSTANCE
$ nova list

A new tenant network can be trivially created with a script like this.
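
For reference, such a script boils down to roughly the following sketch (the network and subnet names, and the CIDR, are made up for illustration):

$ neutron net-create demo-net2
$ neutron subnet-create demo-net2 10.2.0.0/24 --name demo-subnet2
$ neutron router-interface-add $ROUTER_ID demo-subnet2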

Optionally, test the networking setup by trying to ping or ssh into the CirrOS Nova instance.
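
E.g., using the floating IP from the previous step (the address below is hypothetical; CirrOS images use ‘cirros’ as the default user):

$ ping -c 3 172.24.4.3
$ ssh -i oskey1.priv cirros@172.24.4.3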


Given the procedural nature of the above, all of this can be trivially scripted to suit your needs — in fact, upstream OpenStack Infrastructure does use such automated DevStack environments to gate (as part of CI) every change that is submitted to any of the various OpenStack projects.

Finally, to find out how minimal it really is, one way to test is to check the memory footprint inside the DevStack VM, using the ps_mem tool, and compare that with a different DevStack environment with more OpenStack services enabled. Edit: a quick memory profile of a minimal DevStack environment is here: 1.3GB of memory without any Nova instances running (but with OpenStack services running).
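
For reference, capturing that footprint is a one-liner (assuming the ps_mem tool is installed; it needs root privileges):

 $ sudo ps_mem   # prints per-program memory usage plus a total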


Notes from a talk on “Advanced snapshots with libvirt and QEMU”

I just did a talk at a small local conference (Infrastructure.Next, co-located with cfgmgmtcamp.eu) about Advanced Snapshots in libvirt and QEMU.

Slides are here, which contains URLs to more examples.

And some more related notes are here.


Cubietruck: QEMU, KVM and Fedora

Rich Jones previously wrote here on how he got KVM working on Cubietruck; that was in the Fedora 19 timeframe. It wasn’t quite straightforward then: you had to build a custom kernel, a custom U-Boot (Universal Boot Loader), etc.

Recently I got a Cubietruck for testing. The first thing I wanted to try was to boot a Fedora 21 KVM guest. It worked just fine, provided you supply the -cpu host parameter to the QEMU invocation (Rich actually mentions this at the end of his post linked above). Why? The Cubietruck has a Cortex-A7 processor, for which QEMU doesn’t have specific support, so you need to use the same CPU as the host to boot a KVM guest. Peter Maydell, one of the QEMU upstream maintainers, explains a little more here on the why (a kernel limitation).
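
For the record, a minimal sketch of such a QEMU invocation (the kernel image, append line, and disk path below are illustrative; the essential part is -cpu host):

$ qemu-system-arm -enable-kvm -machine virt -cpu host \
    -m 512 -nographic \
    -kernel ./zImage -append "console=ttyAMA0 root=/dev/vda3" \
    -drive file=./f21-arm.qcow2,format=qcow2,if=virtio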

Below is what I ended up doing to successfully boot a Fedora 21 guest on Cubietruck.

  • I downloaded the Fedora 21-Beta ARM image and wrote it to a microSD card. (I used this procedure.)
  • The above Fedora ARM image didn’t have the KVM module. Josh Boyer on IRC said I might need an (L)PAE ARM kernel. Installing it (3.17.4-302.fc21.armv7hl+lpae) did it: this kernel is built with the KVM module enabled.
  • Resized the root filesystem of Fedora 21 on the microSD card. (Procedure I used.)
  • Then, I tried a quick test to boot a KVM guest with libguestfs-test-tool, but the guest boot failed with: “kvm_init_vcpu failed: Invalid argument”. So, I filed a libguestfs bug to track this. After a bit of trial and error on IRC, I made a small edit to the libguestfs source so that the KVM appliance is created with -cpu host. Rich suggested I submit this as a patch, which resulted in this libguestfs commit (resolving the bug I filed).
  • With this in place, a KVM guest boots successfully. (Complete output of a boot via the libguestfs appliance is here.) Also, I tested importing a CirrOS 0.3.3 disk image into libvirt and ran it successfully.


[Trivia: Somehow the USB-to-TTL CP2102 serial converter cable I bought didn’t show boot (or any other) messages when I accessed it via UART (with ‘screen /dev/ttyUSB0 115200’). Fortunately, the regular cable that shipped with the Cubietruck worked just fine. Still, I’d like to find out what’s up with the CP2102 cable, since it was detected as a device in my systemd logs.]


LinuxCon/KVMForum/CloudOpen Eu 2014

While the Linux Foundation’s colocated events (LinuxCon/KVMForum/CloudOpen, Plumbers and a bunch of others) are still in progress (in Düsseldorf, Germany), I thought I’d quickly write a note here.

Some slides and demo notes on managing snapshots/disk image chains with libvirt/QEMU. And, some additional examples with a bit of commentary. (Thanks to Eric Blake, of libvirt project, for reviewing some of the details there.)


libvirt blockcommit: shorten disk image chain by live merging the current active disk content

When using QCOW2-based external snapshots, it is desirable to reduce an entire disk image chain to a single disk image, to retain performance and increase manageability, while the guest is running. Upstream QEMU and libvirt have recently acquired the ability to do that. Relevant git commits for QEMU (Jeff Cody) and libvirt (Eric Blake).

This is best illustrated with a quick example.

For simplicity’s sake, let’s start with the below disk image chain for a guest called vm1:

[base] <-- [sn1] <-- [sn2] <-- [current] (live QEMU)

Once the live active block commit operation is complete (step 5 below), the result will be a flattened disk image chain, where data from sn1, sn2 and current is live-committed into base:

 [base] (live QEMU)

(1) List the current active image in use:

$ virsh domblklist vm1
Target     Source
------------------------------------------------
vda        /export/images/base.qcow2

(2) For a quick test, create an external snapshot. (And repeat the operation two more times, so we have the chain: [base] <-- [sn1] <-- [sn2] <-- [current].)

$ virsh snapshot-create-as \
   --domain vm1 snap1 \
   --diskspec vda,file=/export/images/sn1.qcow2 \
   --disk-only --atomic

(3) Enumerate the backing file chain:

$ qemu-img info --backing-chain current.qcow2
[. . .] # output discarded for brevity

(4) Again, check the current active disk image:

$ virsh domblklist vm1
Target     Source
------------------------------------------------
vda        /export/images/current.qcow2

(5) Live active commit the entire chain, including the pivot:

$ virsh blockcommit vm1 vda \
   --active --pivot --verbose
Block Commit: [100 %]
Successfully pivoted

Explanation:

  • --active: Performs a two-stage operation. In the first stage, it commits the contents of the top images into the base (i.e. sn1, sn2 and current into base); in the second stage, the block job remains awake to synchronize any further changes (from the top images into the base). Here the user can take two actions: cancel the job, or pivot the job, i.e. adopt the base image as the current active image (see the sketch after this list).
  • --pivot: Once data is committed from sn1, sn2 and current into base, it pivots the live QEMU to use base as the active image.
  • --verbose: Displays the progress of the block operation.
  • Finally, the disk image backing chain is shortened to a single disk image.
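
For the curious, the pivot can also be driven as an explicit second step instead of letting --pivot do it in one go. A minimal sketch (same guest and device as above):

# Stage one: commit, then keep synchronizing (the job stays active):
$ virsh blockcommit vm1 vda --active --verbose

# Stage two: pivot the job, making 'base' the live active image
# (or run: virsh blockjob vm1 vda --abort, to cancel instead):
$ virsh blockjob vm1 vda --pivot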

(6) Optionally, list the current active image in use. It’s now back to ‘base’, which has all the contents from current, sn2 and sn1:

$ virsh domblklist vm1
Target     Source
------------------------------------------------
vda        /export/images/base.qcow2
