Gentle warning: very long post. (I tried to segregate it by talks, so you can skip the ones that don’t interest you :) )
I was fortunate to attend the FOSDEM conference for the first time in the frozen city of Brussels, Belgium. For those unfamiliar w/ FOSDEM, the conference is held over 2 days (Saturday and Sunday), is completely volunteer-organized, and involves free and open source software (w/o any space for commercial talks). I believe this is the first FOSDEM where we had more than 25 talks from RH folks. I’ll try to outline the two days of the event and the talks I managed to attend.
My day started off w/ attending the FOSDEM welcome keynote. Some statistics from the welcome note — 4 keynotes; 6 main tracks; 35 stands; 25 rooms; 7 main track talks; 428 scheduled events; 418 speakers; 31 lightning talks; 361 Devroom talks.
This kind of choice may overwhelm people: “Oh, I want to attend this session, but I also want to attend that one and the other one, which may happen concurrently”. But I guess people eventually realize the physical impossibility and stick to a couple of dev rooms or so.
== Day 1 ==
OpenStack News: Last year retrospective
This was in the ‘Virtualization and Cloud’ dev room. Thierry Carrez (Release Manager for the OpenStack project) gave an overview of how the project evolved over the past year, the no. of contributors, the components involved, and what’s coming ahead. He briefly talked about the main projects of OpenStack:
– ‘Nova’ (also called OpenStack Compute) — the central part of an IaaS system; provides a web interface to the virtualization software installed on the host.
– ‘Swift’ (OpenStack Object Storage) — a scalable storage system.
– ‘Glance’ (OpenStack Image Service) — which stores and retrieves disk images.
– ‘Keystone’ (OpenStack Identity) — provides unified authentication across projects.
It appears there has been a near-exponential rise in contributions since last year (given the no. of companies involved and the many individual contributors). He also discussed the OpenStack ‘Horizon’ project, the dashboard providing a web interface to the OpenStack services noted above.
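To make the relationship between these projects a bit more concrete, here is a minimal sketch (not the official client library) of the JSON body a client of that era would POST to Keystone’s v2.0 `/tokens` endpoint; the credential values are illustrative. Keystone’s reply carries a token plus a service catalog, and the client then talks to Nova, Glance or Swift with that token in an `X-Auth-Token` header.

```python
import json

# Sketch: build the authentication request body for Keystone v2.0.
# Username, password and tenant name here are made-up example values.
def keystone_auth_body(username, password, tenant):
    return json.dumps({
        "auth": {
            "passwordCredentials": {"username": username, "password": password},
            "tenantName": tenant,
        }
    })

# The body would be POSTed to http://<keystone-host>:5000/v2.0/tokens;
# the response's service catalog tells the client where Nova/Glance/Swift live.
body = keystone_auth_body("demo", "secret", "demo-tenant")
print(body)
```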
Common Criteria Certification of Open Source Software
I then walked into the ‘Hardware Security and Cryptography’ dev room, where Tomas Gustavson (CTO of PrimeKey) discussed Common Criteria (CC) and open source software. He talked about the process and procedures involved, and how several CC documents are linked together. He then outlined the pains involved: time, money, the technical level required, the fact that only a specific version gets certified, and keeping track of all the minute details, documents and their linkings. However, he conceded it is important, as the certification assures that the certified product works as it is /documented/ to work. He then moved on to the intricacies of combining open source and CC, and mentioned Red Hat, IBM and PrimeKey as vendors who provide certified open source products. Concluding: however tedious the process may be, the end result is satisfying and provides confidence to governments, federal agencies, major banks and related customers deploying such software.
I missed Richard W. Jones’ talk on ‘libguestfs’, as my own talk was scheduled in the same slot.
Overview of Dogtag Certificate System
After the Common Criteria talk, I gave my brief talk about the Dogtag Certificate System to an audience of 50-60 people in the 100-capacity dev room. I started off w/ the different subsystems involved, some configuration overview, and a couple of possible deployment scenarios. Then I talked a little about cloning of subsystems for high availability, the different security mechanisms available, and some command-line tools. After that, I discussed upcoming plans: a REST-based design, tighter integration of the subsystems w/ the FreeIPA project, and the refactoring work in progress. Then I briefly showed a small demo (the talk was for 25-30 mins, so I didn’t manage to do a proper demo in time) of pre-installed subsystems (CA, KRA, OCSP) and the web interface on a virtual machine on my laptop. I was a little nervous while presenting; still, I got a few questions.
Unfortunately, I couldn’t get to meet Kai Engert (upstream/RH Mozilla-NSS maintainer), who organized the dev room. He actually sent me an email to meet up on Saturday night (1st day of the conf.), but I wasn’t able to check it in time (as I didn’t use the (prohibitively expensive) internet at the hotel). He was also handling other talks in the Mozilla dev rooms.
I then moved to the ‘Hypervisors’ track.
Ganeti: A look inside the Virtualization Cluster Management system
Guido Trotter (of Google) discussed their project ‘Ganeti’, which manages clusters of physical machines running virtualization software on commodity hardware. It supports both Xen and KVM hypervisors; live migration appears to be one of its critical features. Guido started with some terminology, the components involved, possible configurations, the roles of virt nodes, and some customizations that could be done. He also talked about storage management and replication.
I was wondering why there was a pressing need for Google to start yet another management-layer project for virtual machines (be it for clusters or something else), as there are already many existing management projects catering to several virtualization use-cases.
Virtualization with KVM: bottom to top, past to future
Paolo Bonzini (of Red Hat) gave a complete overview of the entire virt. stack, covering several use-cases relating to desktop, server and cloud virtualization. He started lower in the stack with the KVM hypervisor’s entry into the Linux kernel and its integration with the QEMU project. From there, he moved up the stack, discussing libvirt for management, its features, and the several libvirt APIs available for other applications to use. Then on to desktop virtualization management software like virt-manager, the more recent ‘Gnome Boxes’ (more on this below), and several other virt-tools for disk manipulation. Moving on, he discussed large-scale virtualization problems and available solutions (oVirt, OpenStack, Ganeti), and did some comparison of these technologies. He concluded with a roadmap for the KVM, QEMU, libvirt, oVirt node and oVirt engine projects.
Linux Containers and OpenVZ
Kir Kolyshkin (OpenVZ project maintainer) started off by introducing the concept of Linux containers, which deal with operating-system-level virtualization, as opposed to whole-system (or full machine) virtualization like QEMU/KVM. This means that with containers there is only the real hardware (no virtual hardware to deal with); a single kernel and many user-space instances. Container technology is primarily used by hosting providers for deploying web applications. An alternative technology (which Red Hat supports and actively contributes to) is LXC (Linux Containers). As there is no hypervisor overhead, higher density is possible with containers. Each container has its own files (chroot; process tree; n/w; devices; IPC objects). He discussed some OpenVZ features, how it compares with LXC, and dynamic resource allocation using the ‘cgroups’ technology, and mentioned some of the tools and other new features/related projects upcoming in OpenVZ:
– vzctl: a tool to control OpenVZ containers.
– VSwap: a new approach to memory management, which requires only two parameters to configure: RAM and swap.
– ploop: a reimplementation of the Linux loop device. It supports ‘plain’ raw and qcow2 formats, n/w storage, snapshots, and fast provisioning via stacked images.
– CRIU (Checkpoint/Restore (mostly) In User-space): http://criu.org
LXC, though, has been gaining more and more traction, as it doesn’t require a ‘patched’ kernel (which OpenVZ needs) to work with containers. But OpenVZ appears to have more deployments, since it’s been around a little longer.
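The VSwap idea of configuring just RAM and swap maps quite directly onto the cgroup-v1 memory controller files that container tools write: a RAM cap, and a combined RAM+swap cap. A rough sketch (the container name and sizes are made-up; real tools would actually write these files under /sys/fs/cgroup):

```python
# Sketch: map a RAM + swap pair (the two VSwap-style parameters) onto the
# standard cgroup-v1 memory controller files. In cgroup v1 the 'memsw'
# limit covers RAM plus swap together, hence the sum below.
def memory_cgroup_settings(cgroup, ram_bytes, swap_bytes):
    base = "/sys/fs/cgroup/memory/" + cgroup
    return {
        base + "/memory.limit_in_bytes": ram_bytes,                      # RAM cap
        base + "/memory.memsw.limit_in_bytes": ram_bytes + swap_bytes,   # RAM + swap cap
    }

# Hypothetical container 'ct101': 512 MiB of RAM, 1 GiB of swap.
settings = memory_cgroup_settings("ct101", 512 * 1024**2, 1024**3)
for path, value in sorted(settings.items()):
    print(path, value)
```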
Native Linux KVM Tool (NLKT)
Sasha Levin (NLKT developer) introduced the NLKT project: a lightweight user-space alternative to QEMU, written from scratch for managing KVM hypervisor based guests (Linux-only at the moment), and hosted inside the kernel source tree under the /tools directory. It was originally born out of a long (100+ email) thread, initiated by Ingo Molnar on the upstream KVM list, as an RFC about unifying the KVM user-space (QEMU) and kernel-space into a single project, since it is a single experience to the end user. After lots of heated discussion, the QEMU/KVM maintainers and contributors had their own reservations, and no consensus was reached. NLKT already works; the project is still in the development phase, with several active contributors. It supports a very minimal set of legacy devices (for simplicity’s and maintenance’s sake), only those required for booting. Also to note, it doesn’t support the many architectures that QEMU supports. He also outlined upcoming features.
NLKT has been submitted for inclusion into the mainline kernel (but is not yet accepted). If it is merged, a Linux distro will by default get a minimal user-space tool to boot Linux guests.
Having said that, QEMU is light years ahead, with thousands of man-hours spent on development and testing; it supports plenty of enterprise features and already has a wide deployment base.
(I experimented w/ NLKT a couple of times out of curiosity (during free time) to see how it works and to learn a different perspective on KVM.)
OpenStack developers meeting and Distribution panel
I was still hanging around the ‘Virtualization and Cloud’ dev room, so I joined this last talk of the day just to observe how things progress. Thierry Carrez and a couple of other OpenStack contributors moderated this session, attended by 40-50 folks, including people representing different distributions and upstream projects. The discussions mostly revolved around the concerns of distributions, the governance model, and further improvements. From my observation, I don’t think there was any concrete consensus on any of the topics. A few Red Hat engineers discussed the work Red Hat is doing, while the moderator was more keen on hearing a clear statement of Red Hat’s stance on OpenStack, and on surrounding areas such as budget for OpenStack conferences.
That ends Day 1.
== Day 2 ==
USB redirection over network
I came halfway into this talk by Hans de Goede (of Red Hat). His talk was primarily about USB redirection: using USB devices (which are plugged into the physical machine) inside a virtual machine. I missed the part where he talked about the special case where the USB device being redirected is not on the physical machine but on a machine located elsewhere, and how that device is accessed over the network inside a guest.
He also gave a small demo of USB redirection where he plugged a mouse into the physical machine and was able to use it inside the virtual machine.
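For reference, SPICE-based USB redirection shows up in a guest’s libvirt domain XML roughly like this (a minimal fragment; the exact USB controller model and number of redirection channels vary by setup):

```xml
<!-- A USB controller plus a SPICE usbredir channel in a libvirt domain -->
<controller type='usb' model='ich9-ehci1'/>
<redirdev bus='usb' type='spicevmc'/>
```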
I also briefly attended a talk on ‘Toolkits and Wayland’, a discussion about the next-generation display server providing much smoother user experiences, presented by Rob Bradford (of Intel) in the ‘CrossDesktop Devroom’.
GNOME Boxes, use other systems with ease
In the ‘CrossDesktop Devroom’, Zeeshan Ali and Marc-André Lureau (of Red Hat) talked about ‘Gnome Boxes’, a desktop virtualization application integrated into Gnome 3. ‘Boxes’ uses libvirt under the hood, while virt-manager remains a separate application that has to be invoked on its own. A super-quick demo was also provided by Zeeshan.
For more info, refer to Daniel P Berrange’s post on this and future of virt-manager.[9.1]
Virtualization Management the oVirt way
Itamar Heim (of Red Hat) presented a high-level overview of the oVirt project, which targets large-scale virtualization/data center management, leveraging many existing (KVM-based) virtualization technologies. He started by discussing the goals of building a community around the virt. stack and a little bit about the governance model. Then he went over the life cycle of virtual machine management with the oVirt interface, using screenshots, and discussed several management features available for live migration, system scheduling, power/image management, monitoring, etc. He then showed a high-level architecture (which shows IPA as a component). Then he briefly discussed ‘Hooks’, which can modify a VM definition as desired, just before the VM starts. Some example hooks he mentioned are:
– CPU Pinning
– Single Root I/O Virtualization (SR-IOV) — which gives the ability to provide a performance benefit similar to assigning a physical PCI device (like a n/w port) to a guest.
– Smart Card
– Hugepages (related to memory)
– Numa (Non-uniform memory access)
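To illustrate the hook mechanism above: an oVirt hook receives the libvirt domain XML just before the VM starts and may modify it. Real hooks use oVirt’s ‘hooking’ helper module to read and write that XML; this standalone sketch just edits a sample document to pin vCPUs (the cpuset value and sample XML are illustrative).

```python
import xml.dom.minidom

# Sketch of a CPU-pinning hook: set a 'cpuset' attribute on the <vcpu>
# element of a libvirt domain definition. In a real oVirt hook the XML
# would come from hooking.read_domxml(), not a hard-coded string.
def pin_vcpus(domxml, cpuset):
    dom = xml.dom.minidom.parseString(domxml)
    vcpu = dom.getElementsByTagName("vcpu")[0]
    vcpu.setAttribute("cpuset", cpuset)   # e.g. pin all vCPUs to cores 0-1
    return dom.toxml()

sample = "<domain><name>vm1</name><vcpu>2</vcpu></domain>"
print(pin_vcpus(sample, "0-1"))
```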
He also outlined several upcoming features: live snapshots, live storage migration, hot plug, multiple storage domains, shared disks, iSCSI disks, shared file system support, storage array integration, Gluster support, libguestfs integration…
oVirt – Engine Core
Omer Frenkel (of Red Hat) discussed the oVirt ‘Engine Core’, the central part of the oVirt platform that provides the administration interfaces. He talked about the other responsibilities of the Engine Core and several internal details. Then he discussed authentication, where user management is done via LDAP servers, with Kerberos auth to those servers, and mentioned IPA/AD as currently supported. He concluded with some administration detail and the road map.
VDSM — the oVirt node management agent
(VDSM: Virtual Desktop and Server Management Daemon)
Federico Simoncelli (of Red Hat) discussed VDSM, a high-level API for managing cluster nodes, originally tailored for the needs of oVirt. It is written in Python, and is multi-threaded and multi-process. He outlined some responsibilities of VDSM: it is used to dynamically manage anything from a few VMs on a single host to 1000s of VMs on a cluster of 100s of hosts using multiple storage targets. He concluded by discussing the storage architecture and thin provisioning.
He offered Red Hat swag for audience who asked questions.
Building application sandboxes on top of LXC and KVM w/ libvirt
Daniel P. Berrange (of Red Hat) gave an excellent talk on building sandboxes on top of LXC and KVM using libvirt, to an almost full crowd of 500. He started off by differentiating the DAC and MAC access control mechanisms, and then discussed the idea of ‘application sandboxes’, where the goal is to isolate any kind of regular application, thus providing multiple defense layers. Before going further, he distinguished the ‘selinux sandbox’ from the ‘libvirt sandbox’ his talk was about. He then talked about start-up mechanisms for the different libvirt drivers (KVM, LXC) and their performance overheads (CPU execution, start-up/shutdown penalties, device access). Then he discussed some real-life use cases where sandboxing can be applied:
– Deploying multiple Apache Virtual hosts (and providing strong isolation) ;
– Audio transcoding of an ‘ogg’ file obtained from an untrusted source, converting it into ‘raw’ in a sandboxed environment, thus avoiding file-system and n/w access;
– Running browser instances in a sandboxed environment(one for banking, one for general use, etc..)
– mock RPM build (chroot is installed using ‘rpm’ in a sandbox, where malicious %post/%pre scripts can escape the sandboxed env.)
He discussed this in a bit more detail, with some examples of the virt-sandbox command, on his blog[13.1]
That’s it for talks.
After that, I headed to the Fedora stand and did some booth duty, answered (politely) a couple of questions (“hey, why isn’t Fedora nice to me?”, and “when can we expect to see this bug fixed?”), and handed over some swag to folks; then we dismantled the booth and headed out for dinner into the chill.
After dinner, myself, Tom Callaway (Fedora Engineering Manager), Jonathan Blandford (Gnome Desktop Manager), the Gnome ‘Boxes’ team, and a couple of other community members went to watch the Super Bowl (American football) at a place called “Fat Boy’s” (probably it could be named better). Though I don’t follow the game at all, I supported the NY Giants because a character I read about in a book likes them. And I was diligently warned by Tom Callaway (a Patriots supporter) that I could, but I might get hurt :) . I left exactly at half-time and walked back to the hotel, as it was already 02:00 AM and I couldn’t keep myself awake despite having 2 cups of strong tea. Later I was told the NY Giants won.
This was the first FOSDEM I attended. I felt it was a great conference (minus the cold wave) where diverse groups converge in one place.
I also had a chance to meet some Red Hatters and several community members (though just very briefly, given the tight schedule of the conference): Daniel P Berrange, Richard W Jones, Paolo Bonzini, Tom Callaway, Lennart Poettering, Zeeshan Ali, Marc-André Lureau; a couple of others working on the Aeolus/DeltaCloud projects: Michal Fojtik, Francesco Vollera, Marios; and Sasha Levin, Pekka Enberg, Christopher Wickert, Jeorg Simon, Bert Desmet, Thorsten Leemhuis, Jonathan Blandford, Jeron Van Meun and many others.
I tried to review the post thrice over. Please forgive if there are any grammatical errors.
Some pictures: http://www.flickr.com/photos/kashyapchamarthy/