Previously, I posted about snapshots here, which briefly discussed the different types of snapshots. In this post, let’s explore how external snapshots work. To quickly recap: external snapshots are a type of snapshot where there’s a base image (the original disk image), and its difference/delta (i.e., the snapshot image) is stored in a new QCOW2 file. Once the snapshot is taken, the original disk image becomes ‘read-only’, and can then be used as a backing file for other guests.
It’s worth mentioning here that:
- The original disk image can be in either RAW or QCOW2 format. When a snapshot is taken, ‘the difference’ is stored in a separate QCOW2 file
- The virtual machine has to be running live. With live snapshots, no guest downtime is experienced when a snapshot is taken
- At the moment, external (live) snapshots work for ‘disk-only’ snapshots (and not VM state). Work for both disk and VM state (and also reverting to external disk snapshot state) is in progress upstream (slated for libvirt-0.10.2)
Before we go ahead, here’s some version info. I’m testing on Fedora-17 (host), and the guest (named ‘testvm’) is running Fedora-18 (Test Compose):
$ rpm -q libvirt qemu-kvm ; uname -r
libvirt-0.10.1-3.fc17.x86_64
qemu-kvm-1.2-0.2.20120806git3e430569.fc17.x86_64
3.5.2-3.fc17.x86_64
$
External disk-snapshots (live) using QCOW2 as the original image:
Let’s see an illustration of external (live) disk-only snapshots. First, ensure the guest is running:
$ virsh list
 Id    Name                           State
----------------------------------------------------
 3     testvm                         running
$
Then, list all the block devices associated with the guest:
$ virsh domblklist testvm --details
Type       Device     Target     Source
------------------------------------------------
file       disk       vda        /export/vmimgs/testvm.qcow2
$
Next, let’s create a disk-only snapshot of the guest, while it is running:
$ virsh snapshot-create-as testvm snap1-testvm "snap1 description" \
    --diskspec vda,file=/export/vmimgs/snap1-testvm.qcow2 \
    --disk-only --atomic
Some details of the flags used:
- Passing a ‘--diskspec’ parameter adds the ‘disk’ elements to the snapshot XML file
- ‘--disk-only’ takes a snapshot of only the disk (not the VM state)
- ‘--atomic’ ensures the snapshot either completes fully or fails without making any changes
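For reference, the ‘--diskspec’ option translates into a ‘disk’ element in the snapshot XML that libvirt generates. A rough sketch of what that XML looks like (element details may vary slightly across libvirt versions):

```xml
<domainsnapshot>
  <name>snap1-testvm</name>
  <description>snap1 description</description>
  <disks>
    <!-- one disk element per --diskspec; snapshot='external' means
         the delta goes into the new QCOW2 file named in <source> -->
    <disk name='vda' snapshot='external'>
      <source file='/export/vmimgs/snap1-testvm.qcow2'/>
    </disk>
  </disks>
</domainsnapshot>
```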
Let’s check information about the snapshot we just took by running qemu-img:
$ qemu-img info /export/vmimgs/snap1-testvm.qcow2
image: /export/vmimgs/snap1-testvm.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 2.5M
cluster_size: 65536
backing file: /export/vmimgs/testvm.qcow2
$
Apart from the above, I created two more snapshots (with the same syntax as above) for illustration purposes. Now, the snapshot tree looks like this:
$ virsh snapshot-list testvm --tree
snap1-testvm
  |
  +- snap2-testvm
      |
      +- snap3-testvm
$
For the above example, the image file chain [ base <- snap1 <- snap2 <- snap3 ] reads as: snap3 has snap2 as its backing file, snap2 has snap1 as its backing file, and snap1 has the base image as its backing file. We can see the backing file info with qemu-img:
#--------------------------------------------#
$ qemu-img info /export/vmimgs/snap3-testvm.qcow2
image: /export/vmimgs/snap3-testvm.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 129M
cluster_size: 65536
backing file: /export/vmimgs/snap2-testvm.qcow2
#--------------------------------------------#
$ qemu-img info /export/vmimgs/snap2-testvm.qcow2
image: /export/vmimgs/snap2-testvm.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 3.6M
cluster_size: 65536
backing file: /export/vmimgs/snap1-testvm.qcow2
#--------------------------------------------#
$ qemu-img info /export/vmimgs/snap1-testvm.qcow2
image: /export/vmimgs/snap1-testvm.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 2.5M
cluster_size: 65536
backing file: /export/vmimgs/testvm.qcow2
$
#--------------------------------------------#
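If you script around backing chains, the ‘backing file’ line can be extracted from qemu-img output with awk. A small sketch (pure shell; the sample output below is hardcoded for illustration — on a real host you would pipe `qemu-img info /export/vmimgs/snap3-testvm.qcow2` in instead):

```shell
# Sample `qemu-img info` output, hardcoded for illustration.
info='image: /export/vmimgs/snap3-testvm.qcow2
file format: qcow2
backing file: /export/vmimgs/snap2-testvm.qcow2'

# Split on ": " and print the value of the "backing file" line.
backing=$(printf '%s\n' "$info" | awk -F': ' '/^backing file:/ {print $2}')
echo "$backing"   # /export/vmimgs/snap2-testvm.qcow2
```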
Now, if we no longer need snap2 and want to merge its data into snap3, making snap1 snap3’s backing file, we can do a virsh blockpull operation as below:
#--------------------------------------------#
$ virsh blockpull --domain testvm \
    --path /export/vmimgs/snap3-testvm.qcow2 \
    --base /export/vmimgs/snap1-testvm.qcow2 \
    --wait --verbose
Block Pull: [100 %]
Pull complete
#--------------------------------------------#
Here, --path is the path to the active snapshot file, and --base is the backing file to pull down to. So in the above example, the data in the images above snap1 (i.e., snap2) is pulled into snap3, flattening the backing file chain so that snap1 becomes snap3’s direct backing file, which can be confirmed by running qemu-img again.
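To make the semantics concrete, here is a tiny pure-shell sketch (no libvirt involved; the image names are just strings) that computes what the backing chain looks like after a blockpull down to a given --base:

```shell
# `virsh blockpull --base B` pulls the data of every image above B into
# the top (active) image, so B becomes the top image's new backing file.
chain="base snap1 snap2 snap3"   # leftmost = ultimate backing file
pull_base="snap1"                # the --base argument

top=$(echo "$chain" | awk '{print $NF}')   # active image: snap3
new_chain=""
for img in $chain; do
  new_chain="$new_chain $img"
  [ "$img" = "$pull_base" ] && break       # keep everything up to the base
done
result=$(echo $new_chain $top)             # base stays, snap2 is gone
echo "$result"                             # base snap1 snap3
```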
Note the new backing file of snap3:
$ qemu-img info /export/vmimgs/snap3-testvm.qcow2
image: /export/vmimgs/snap3-testvm.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 145M
cluster_size: 65536
backing file: /export/vmimgs/snap1-testvm.qcow2
$
A couple of things to note here, after discussion with Eric Blake(thank you):
- If we do a listing of the snapshot tree again (now that the ‘snap2-testvm.qcow2’ backing file is no longer in use),
$ virsh snapshot-list testvm --tree
snap1-testvm
  |
  +- snap2-testvm
      |
      +- snap3-testvm
$
one might wonder: why is snap3 still pointing to snap2? The thing to note here is that the above is the snapshot chain, which is independent of each virtual disk’s backing file chain. So ‘virsh snapshot-list’ still accurately lists the information as it was at the time of snapshot creation (and not what we’ve done after creating the snapshot). Thus, from the above snapshot tree, if we were to revert to snap1 or snap2 (when revert-to-disk-snapshots is available), it would still be possible to do so, meaning:
It’s possible to go from this state:
base <- snap3 (data from snap1 and snap2 pulled into snap3)
we can still revert to:
base <- snap1 (thus undoing the changes in snap2 & snap3)
External disk-snapshots (live) using RAW as the original image:
With external disk-snapshots, the backing file can be RAW as well (unlike ‘internal snapshots’, which only work with QCOW2 files, where the snapshots and deltas are all stored in a single QCOW2 file).
A quick illustration below; the commands are self-explanatory. Note the change (from RAW to QCOW2) in the block disk associated with the guest, before and after taking the disk snapshot (in the virsh domblklist output):
#-------------------------------------------------#
$ virsh list | grep f17btrfs2
 7     f17btrfs2                      running
$
#-------------------------------------------------#
$ qemu-img info /export/vmimgs/f17btrfs2.img
image: /export/vmimgs/f17btrfs2.img
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 1.5G
$
#-------------------------------------------------#
$ virsh domblklist f17btrfs2 --details
Type       Device     Target     Source
------------------------------------------------
file       disk       hda        /export/vmimgs/f17btrfs2.img
$
#-------------------------------------------------#
$ virsh snapshot-create-as f17btrfs2 snap1-f17btrfs2 \
    "snap1-f17btrfs2-description" \
    --diskspec hda,file=/export/vmimgs/snap1-f17btrfs2.qcow2 \
    --disk-only --atomic
Domain snapshot snap1-f17btrfs2 created
$
#-------------------------------------------------#
$ qemu-img info /export/vmimgs/snap1-f17btrfs2.qcow2
image: /export/vmimgs/snap1-f17btrfs2.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 196K
cluster_size: 65536
backing file: /export/vmimgs/f17btrfs2.img
$
#-------------------------------------------------#
$ virsh domblklist f17btrfs2 --details
Type       Device     Target     Source
------------------------------------------------
file       disk       hda        /export/vmimgs/snap1-f17btrfs2.qcow2
$
#-------------------------------------------------#
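As an aside, if you script around this, the current active image for a given target device can be pulled out of domblklist output with awk. A small sketch (the sample output below is hardcoded for illustration; on a live host you would pipe `virsh domblklist f17btrfs2 --details` in instead):

```shell
# Sample `virsh domblklist --details` output, hardcoded for illustration.
out='Type       Device     Target     Source
------------------------------------------------
file       disk       hda        /export/vmimgs/snap1-f17btrfs2.qcow2'

# Print the Source (4th column) of the row whose Target (3rd column) is hda.
src=$(printf '%s\n' "$out" | awk '$3 == "hda" {print $4}')
echo "$src"   # /export/vmimgs/snap1-f17btrfs2.qcow2
```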
Also note: all snapshot XML files, where libvirt tracks the metadata of snapshots, are located under /var/lib/libvirt/qemu/snapshots/$guestname (and the original libvirt XML file is located under /etc/libvirt/qemu/$guestname.xml).
Great post ! It’s always a pleasure to read you. Thank you
Glad you liked it.
Hi, when I run: virsh snapshot-create-as vm1352713868063 snap-063 "snap description" --diskspec vda,file=/home/snap-063.qcow2 --disk-only
the error is:
error: unsupported configuration: external snapshot file for disk vda already exists and is not a block device: /home/snap-063.qcow2
I want to know the meaning of "disk vda already exists and is not a block device: /home/snap-063.qcow2"
libvirt version: 0.9.8 on Ubuntu 12.04
Thanks very much for any reply.
liuzhijun
Can you provide the output of:
virsh snapshot-list vmname
qemu-img info /path/to/your/vmname.qcow2 ?
Also, can you please try with the libvirt 1.0 release, which is the newest one?
Well, great post, but… how can I completely discard a snapshot and flatten to the base image, freeing space and preparing for a new snapshot?
I would like to script these steps to back up the data of a running mail server, so:
1) take a snapshot
2) mount it read-only
3) rsync it on destination
4) discard the snapshot, thus flattening the snap file into the base one and removing the snapshot.
Does this work for you ? https://kashyapc.wordpress.com/2013/01/22/live-backup-with-external-disk-snapshots-and-libvirts-blockpull/
In short, it does this:
1/ Create an external, live, disk-only snapshot of the original image (say, base.qcow2); as a result, a new overlay disk image (sn1-of-base.qcow2) becomes the current active image
2/ You can back up the original disk image (base.qcow2) using rsync or similar
3/ Once changes have accumulated in the current active layer (sn1-of-base.qcow2), merge the contents of the original disk image (base.qcow2) into sn1-of-base.qcow2 using ‘blockpull’
4/ As a result of step-3/, sn1-of-base.qcow2 is now a standalone disk image.
If you need to take further back-ups, you can repeat the above process.
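The steps above can be sketched as a small script. Everything here is an assumption-laden sketch: the guest name, the device target (vda), and all paths are hypothetical, and by default it only prints the commands (DRY_RUN=1) so you can review the sequence before running it against a real guest:

```shell
#!/bin/sh
# Dry-run sketch of the external-snapshot backup cycle described above.
DRY_RUN=${DRY_RUN:-1}                        # 1 = just print the commands
GUEST=vm1                                    # hypothetical guest name
BASE=/export/vmimgs/base.qcow2               # hypothetical original image
OVERLAY=/export/vmimgs/sn1-of-base.qcow2     # hypothetical overlay image
DEST=backup-host:/backups/                   # hypothetical rsync target

run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# 1) Create the external, disk-only snapshot; base.qcow2 becomes the
#    read-only backing file and the overlay receives new writes.
run virsh snapshot-create-as "$GUEST" backup-snap \
    --diskspec vda,file="$OVERLAY" --disk-only --atomic
# 2) Back up the now-quiescent original image.
run rsync -a "$BASE" "$DEST"
# 3) Merge the base into the overlay so it becomes standalone again.
run virsh blockpull --domain "$GUEST" --path "$OVERLAY" --wait
```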
I also noted an example of live-backups using ‘blockcopy’ — http://kashyapc.fedorapeople.org/virt/lc-2012/live-backup-with-blockcopy.txt
Hope that helps
Many thanks for the prompt response.
I have to test it on a non-production environment and then I’ll tell you if it is right for me.
Regards.
Hi, I have a problem… virsh snapshot-create-as returns an error stating that --atomic is not supported. Also, if I remove --atomic (just for a test) I get:
internal error: unable to execute QEMU command ‘blockdev-snapshot-sync’: An undefined error has occurred
The command I gave for my test is:
virsh snapshot-create test_posta snap1_test_posta "backup in corso" --diskspec vdb,file=/media/virtual/snap-mail_opt-clone.qcow2 --disk-only --atomic
OK, sorry, it was an AppArmor issue; if anyone is interested, I can post the solution.
@Mattia if your solution doesn’t include turning off AppArmor, I would be overjoyed for that information :-)
Well, I stopped trying because I ran out of time, but I did successfully manage to obtain a live snapshot; I just couldn’t revert or discard it, so for my needs it’s totally useless. I only use the official KVM package in Ubuntu 12.04; it has recently been updated and I haven’t tried again. I never turned off AppArmor.
Hi Kashyap,
Great post!
Do you have any article that talks about KVM snapshot on multiple disks of same VM?
I am looking for same examples and some best practices.
How is the data consistency of overall VM maintained when we take disk snapshot one at a time?
Thanks!
Hi kashyapc,
One simple question: after creating this external snapshot, I observe that libvirt is unable to manage its deletion:
# virsh snapshot-delete hermes hermes-snap1
error: Failed to delete snapshot hermes-snap1
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
I could delete the XML with the --metadata option, but domblklist still shows the snapshot disk (despite my having removed it with rm):
# virsh snapshot-delete hermes hermes-snap1 --metadata
# virsh snapshot-list hermes
Name Creation Time State
————————————————————
# virsh domblklist hermes
Target Source
————————————————
vda /var/lib/libvirt/images/hermes.hermes-snap1
So, is there any way to remove these external snapshots completely?
Thank you for your time,
Gonzo
Gonzo,
Yes, there’s not really an elegant way to remove external snapshots at the moment. However, there are a few ways to do it with a recent enough libvirt and QEMU, using the ‘active blockcommit’ feature. I wrote a post about it here: http://kashyapc.com/2014/10/07/libvirt-blockcommit-shorten-disk-image-chain-by-live-merging-the-current-active-disk-content/
In your specific case, I’m afraid you just have to manually edit the VM’s XML (virsh edit hermes) and replace ‘hermes.hermes-snap1’ with the original disk.
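That is, in the disk element of the guest XML, point the source back at the original image. Roughly like this (the original path and format here are hypothetical; make sure the driver type matches the original image’s actual format, e.g. qcow2 vs raw):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <!-- was: /var/lib/libvirt/images/hermes.hermes-snap1 -->
  <source file='/var/lib/libvirt/images/hermes.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```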
Also, here’s a related resource (a discussion by Eric Blake, from the archives of libvirt-list): http://wiki.libvirt.org/page/I_created_an_external_snapshot,_but_libvirt_won%27t_let_me_delete_or_revert_to_it
Hope that helps.
How to DELETE external snapshot AT ALL ???
No blockcommit, no blockcopy, no anything else. HOW TO DELETE!!! the fucking snapshot from VM ???
[First off, tone down that goddamn language, will you? Responding only because it’s a genuine question.]
There is no “let me blindly click a button” way to delete an external snapshot. For now.
However, you can do it this clunky way:
(1) Edit the libvirt guest XML by pointing to the right disk image/snapshot file:
virsh edit vm1
(2) Then, tell libvirt to forget about the external snapshot file (that you don’t need) by cleaning up the relevant metadata:
virsh snapshot-delete vm snap1 --metadata
Also, if you have patience, look up libvirt-users list archives. It has a treasure trove of information.
Thanks for the detailed posts and some great explanation on the external and internal snapshots. Came in very handy. Highly recommended page :).