Tag Archives: fedora

Minimal DevStack with OpenStack Neutron networking

This post discusses a way to setup minimal DevStack (OpenStack development environment from git sources) with Neutron networking, in a virtual machine.

(a) Setup minimal DevStack environment in a VM

Prepare VM, take a snapshot

Assuming you have a Linux (Fedora 21 or above, or any Debian variant) virtual machine set up (with at least 8GB of memory and 40GB of disk space), take a quick snapshot. The below creates a QCOW2 internal snapshot (which means your disk image should be a QCOW2 image); you can invoke it live or offline:

 $ virsh snapshot-create-as devstack-vm cleanslate

So that if something goes wrong, you can revert to this clean state by simply doing:

 $ virsh snapshot-revert devstack-vm cleanslate
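
To double-check, you can list the snapshots of the VM at any time:

 $ virsh snapshot-list devstack-vm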

Setup DevStack

There are plenty of configuration variants to set up DevStack. The upstream documentation has its own recommendation for a minimal configuration. The below configuration has a much smaller footprint, enabling only the Nova (Compute, Scheduler, API and Conductor services), Keystone, Neutron and Glance (Image service) services.

$ mkdir -p $HOME/sources/cloud
$ git clone https://git.openstack.org/openstack-dev/devstack
$ chmod go+rx $HOME
$ cd devstack
$ cat << 'EOF' > local.conf
[[local|localrc]]
DEST=$HOME/sources/cloud
DATA_DIR=$DEST/data
SERVICE_DIR=$DEST/status
SCREEN_LOGDIR=$DATA_DIR/logs/
LOGFILE=$DATA_DIR/logs/devstacklog.txt
VERBOSE=True
USE_SCREEN=True
LOG_COLOR=True
RABBIT_PASSWORD=secret
MYSQL_PASSWORD=secret
SERVICE_TOKEN=secret
SERVICE_PASSWORD=secret
ADMIN_PASSWORD=secret
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
SERVICE_HOST=127.0.0.1
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256
FORCE_CONFIG_DRIVE=always
VIRT_DRIVER=libvirt
# To use nested KVM, un-comment the below line
# LIBVIRT_TYPE=kvm
IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img"
# If you have the `dnf` package manager, use it to speed up DevStack build/tear down
export YUM=dnf
EOF

NOTE: If you’re using KVM-based virtualization under the hood, refer to this upstream documentation on setting it up with DevStack, so that the VMs in your OpenStack cloud (i.e. Nova instances) run relatively faster than with plain QEMU emulation. So, if you have the relevant hardware, you might want to set that up before proceeding further.
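
As a quick sanity check before stacking, you can confirm that nested KVM is actually enabled on the bare-metal host (this assumes an Intel CPU with the kvm_intel module loaded; the parameter prints Y when nested support is on):

$ cat /sys/module/kvm_intel/parameters/nested
Y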

Invoke the install script:

 $ ./stack.sh 

[27MAR2015 Update]: Don’t forget to systemctl enable the below services so they start on boot — this ensures you can successfully start all OpenStack services when you reboot your DevStack VM:

 $ systemctl enable openvswitch mariadb rabbitmq-server 
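
You can verify they are enabled with:

 $ systemctl is-enabled openvswitch mariadb rabbitmq-server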

(b) Configure Neutron networking

Once DevStack installation completes successfully, let’s set up Neutron networking.

Set Neutron router and add security group rules

(1) Source the user tenant (‘demo’ user) credentials:

 $ . openrc demo

(2) Enumerate Neutron security group rules:

$ neutron security-group-list

(3) For convenience, create a few environment variables capturing the IDs of the Neutron public network, private network and router:

$ PUB_NET=$(neutron net-list | grep public | awk '{print $2;}')
$ PRIV_NET=$(neutron net-list | grep private | awk '{print $2;}')
$ ROUTER_ID=$(neutron router-list | grep router1 | awk '{print $2;}')

(4) Set the Neutron gateway for the router:

$ neutron router-gateway-set $ROUTER_ID $PUB_NET

(5) Add security group rules to enable ping and ssh:

$ neutron security-group-rule-create --protocol icmp \
    --direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp  \
    --port-range-min 22 --port-range-max 22 --direction ingress default

Boot a Nova instance

Source the ‘demo’ user’s Keystone credentials, add a Nova key pair, and boot an ‘m1.small’ flavored CirrOS instance:

$ . openrc demo
$ nova keypair-add oskey1 > oskey1.priv
$ chmod 600 oskey1.priv
$ nova boot --image cirros-0.3.3-x86_64-disk \
    --nic net-id=$PRIV_NET --flavor m1.small \
    --key_name oskey1 cirrvm1 --security_groups default
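
To keep an eye on the instance while it boots, you can poll its status and peek at its console log (just a sanity check, assuming the instance name cirrvm1 from above):

$ nova list
$ nova console-log cirrvm1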

Create a floating IP and assign it to the Nova instance

The below sequence of commands enumerates the Nova instances, finds the Neutron port ID for the specific instance, creates a floating IP and associates it with the Nova instance, and finally enumerates the Nova instances again so you can see both the floating and fixed IPs for it:

 
$ nova list
$ neutron port-list --device-id $NOVA-INSTANCE-UUID
$ neutron floatingip-create public
$ neutron floatingip-associate $FLOATING-IP-UUID $PORT-ID-OF-NOVA-INSTANCE
$ nova list
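
If you’d rather not copy and paste UUIDs by hand, the same sequence can be captured into shell variables. A rough sketch, assuming the instance name cirrvm1 from above (the grep/awk parsing is tied to the default table output, so adjust as needed):

$ NOVA_ID=$(nova list | grep cirrvm1 | awk '{print $2;}')
$ PORT_ID=$(neutron port-list --device-id $NOVA_ID | grep ip_address | awk '{print $2;}')
$ FLOATIP_ID=$(neutron floatingip-create public | grep ' id ' | awk '{print $4;}')
$ neutron floatingip-associate $FLOATIP_ID $PORT_ID
$ nova list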

Creating a new tenant network can be done trivially with a script like this.
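
For illustration, a rough sketch of the kind of commands such a script runs (the names net2/subnet2 and the 10.2.0.0/24 range here are just placeholders):

$ neutron net-create net2
$ neutron subnet-create --name subnet2 net2 10.2.0.0/24
$ neutron router-interface-add $ROUTER_ID subnet2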

Optionally, test the networking setup by trying to ping or ssh into the CirrOS Nova instance.
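
For example, assuming the floating IP assigned above was 172.24.4.3 (substitute the one from your nova list output; the default CirrOS user is ‘cirros’):

$ ping -c 3 172.24.4.3
$ ssh -i oskey1.priv cirros@172.24.4.3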


Given the procedural nature of the above, all of this can be trivially scripted to suit your needs — in fact, the upstream OpenStack Infrastructure uses such automated DevStack environments to gate (as part of CI) every change submitted to any of the various OpenStack projects.

Finally, to find out how minimal it really is, one way to test is to check the memory footprint inside the DevStack VM using the ps_mem tool, and compare that with a different DevStack environment that has more OpenStack services enabled. Edit: a quick memory profile of a minimal DevStack environment is here — 1.3GB of memory without any Nova instances running (but with the OpenStack services running).


Cubietruck: QEMU, KVM and Fedora

Rich Jones previously wrote here on how he got KVM working on the Cubietruck — that was in the Fedora 19 timeframe. It wasn’t quite straightforward then: you had to build a custom kernel, a custom U-Boot (Universal Boot Loader), etc.

Recently I got a Cubietruck for testing. The first thing I wanted to try was to boot a Fedora 21 KVM guest. It worked just fine — just make sure to supply the -cpu host parameter to the QEMU invocation (Rich actually mentions this at the end of his post linked above). Why? The Cubietruck has a Cortex-A7 processor, for which QEMU doesn’t have specific support, so you need to use the same CPU as the host to boot a KVM guest. Peter Maydell, one of the QEMU upstream maintainers, explains a little more here on the why (a kernel limitation).
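
If you boot the guest via libvirt rather than invoking QEMU by hand, the libvirt equivalent of -cpu host is CPU mode ‘host-passthrough’ in the domain XML. A quick way to confirm it is set (f21vm here is just a hypothetical guest name):

$ virsh dumpxml f21vm | grep cpu
  <cpu mode='host-passthrough'/>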

Below is what I ended up doing to successfully boot a Fedora 21 guest on Cubietruck.

  • I downloaded the Fedora 21-Beta ARM image and wrote it to a microSD card. (I used this procedure.)
  • The above Fedora ARM image didn’t have the KVM module. Josh Boyer on IRC said I might need an (L)PAE ARM kernel. Installing it (3.17.4-302.fc21.armv7hl+lpae) did the trick — it is built with the KVM module enabled.
  • Resized the root filesystem of Fedora 21 on the microSD card. (Procedure I used.)
  • Then, I tried a quick test to boot a KVM guest with libguestfs-test-tool. But the guest boot failed with:
    “kvm_init_vcpu failed: Invalid argument”. So, I filed a libguestfs bug to track this. After a bit of trial and error on IRC, I made a small edit to the libguestfs source so that the KVM appliance is created with -cpu host. Rich suggested I submit this as a patch — which resulted in this libguestfs commit (resolving the bug I filed).
  • With this in place, a KVM guest boots successfully. (Complete output of the boot via the libguestfs appliance is here.) I also tested importing a CirrOS 0.3.3 disk image into libvirt and running it successfully.


[Trivia: Somehow the USB-to-TTL CP2102 serial converter cable I bought didn’t show boot (or any other) messages when I accessed it via UART (with ‘screen /dev/ttyUSB0 115200’). Fortunately, the regular cable that shipped with the Cubietruck worked just fine. Still, I’d like to find out what’s up with the CP2102 cable, since it was detected as a device in my systemd logs.]


Upcoming: OpenStack workshop at Fedora Flock conference, Prague (Aug 6-9, 2014)

UPDATE: I’ll not be able to make it to Prague due to plenty of other things going on. However, Jakub Ružička, an OpenStack developer and RDO project packager, kindly agreed (thank you!) to step in to do the talk/workshop.

Fedora Project’s yearly flagship conference, Flock, is in Prague (August 6-9, 2014) this time. A proposal I submitted for an OpenStack workshop was accepted. The aim of the workshop is to do a live setup of a minimal, 2-node OpenStack environment using virtual machines (which also involves KVM-based nested virtualization). The workshop also aims to help understand some of the OpenStack Neutron networking components and related debugging techniques.

Slightly more verbose details are in the abstract. There is also an etherpad for capturing notes/planning and for remote participation.


Create a minimal Fedora Rawhide guest

Create a Fedora 20 guest from scratch using virt-builder, fully update it, and install a couple of packages:

$ virt-builder fedora-20 -o rawhide.qcow2 --format qcow2 \
  --update --selinux-relabel --size 40G \
  --install "fedora-release-rawhide yum-utils"

[UPDATE: Starting with Fedora 21, the fedora-release-rawhide package is renamed to fedora-repos-rawhide.]

Import the disk image into libvirt:

$ virt-install --name rawhide --ram 4096 --disk \
  path=/home/kashyapc/rawhide.qcow2,format=qcow2,cache=none \
  --import

Log in to the guest via the serial console, and upgrade to Rawhide:

$ yum-config-manager --disable fedora updates updates-testing
$ yum-config-manager --enable rawhide
$ yum --releasever=rawhide distro-sync --nogpgcheck
$ reboot

Optionally, create a QCOW2 internal snapshot (live or offline) of the guest:

$ virsh snapshot-create-as rawhide snap1 \
  "Clean Rawhide 19MAR2014"

Here are a couple of methods for upgrading to Rawhide.


Capturing x86 CPU diagnostics

Some time ago I learnt from Paolo Bonzini (upstream KVM maintainer) about a little debugging utility, x86info (written by Dave Jones), which captures detailed CPU diagnostics: TLB, cache sizes, CPU feature flags, model-specific registers, etc. Take a look at its man page for specifics.

Install:

 $  yum install x86info -y 

Run it and capture the output in a file:

 $  x86info -a 2>&1 | tee stdout-x86info.txt  

As part of debugging KVM-based nested virtualization issues, here I captured the x86info of L0 (bare metal, Intel Haswell), L1 (guest hypervisor) and L2 (nested guest, running on L1).
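
Relatedly, a quick way to see whether the hardware virtualization flag is exposed at a given level (vmx on Intel, svm on AMD) is to count its occurrences in /proc/cpuinfo; a non-zero count on L0 and L1 is what makes nested KVM possible:

 $ grep -E -c '(vmx|svm)' /proc/cpuinfo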


Neutron configs for a two-node OpenStack Havana setup (on Fedora-20)

I managed to prepare a two-node OpenStack Havana setup, configured by hand (URL to my notes below). Here are some Neutron configurations that worked for me.

Setup details:

  • Two Fedora 20 minimal (@core) virtual machines to run the Controller & Compute nodes.
  • Services on the Controller node: Keystone, Cinder, Glance, Neutron, Nova. Neutron networking is set up with the OpenvSwitch plugin, network namespaces, and GRE tunneling.
  • Services on Compute node: Nova (openstack-nova-compute service), Neutron (neutron-openvswitch-agent), libvirtd, OpenvSwitch.
  • Both nodes are manually configured. My notes are here.

Configurations

OpenvSwitch plugin configuration — /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini — on Controller node:

$ cat plugin.ini | grep -v ^$ | grep -v ^#
[ovs]
[agent]
[securitygroup]
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.122.163
[DATABASE]
sql_connection = mysql://neutron:fedora@vm01-controller/ovs_neutron
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Neutron configuration — /etc/neutron/neutron.conf:

$ cat neutron.conf | grep -v ^$ | grep -v ^#
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = localhost
auth_strategy = keystone
ovs_use_veth = True
allow_overlapping_ips = True
qpid_port = 5672
[quotas]
quota_network = 20
quota_subnet = 20
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_host = 192.168.122.163
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
[database]
[service_providers]

Neutron L3 agent configuration — /etc/neutron/l3_agent.ini:

$ cat l3_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
ovs_use_veth = True
use_namespaces = True
metadata_ip = 192.168.122.163
metadata_port = 8700

Neutron metadata agent — /etc/neutron/metadata_agent.ini:

$ cat metadata_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
auth_url = http://192.168.122.163:35357/v2.0/
auth_region = regionOne
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
nova_metadata_ip = 192.168.122.163
nova_metadata_port = 8700
metadata_proxy_shared_secret = fedora

iptables rules on Controller node:

$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001 cinder incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 horizon incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001 glance incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5000,35357 -m comment --comment "001 keystone incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001 mariadb incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001 novncproxy incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment "001 novaapi incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001 neutron incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001 qpid incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT 
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -p gre -j ACCEPT 
-A OUTPUT -p gre -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

iptables rules on Compute node:

$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

OpenvSwitch database contents:

$ ovs-vsctl show
6f5d0e33-7013-4816-bc97-29af9abe8309
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap63ea2815-b5"
            tag: 1
            Interface "tap63ea2815-b5"
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port "tape7110dba-a9"
            Interface "tape7110dba-a9"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.163", out_key=flow, remote_ip="192.168.122.100"}
    ovs_version: "2.0.0"

NOTE: I SCPed the Neutron configuration neutron.conf and the OpenvSwitch plugin configuration plugin.ini from the Controller to the Compute node (don’t forget to replace the local_ip attribute appropriately — I made that mistake).
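
Roughly, the copy plus the local_ip edit looks like the below (the Compute node hostname vm02-compute and its IP 192.168.122.100 are assumptions based on the GRE remote_ip seen above; openstack-config comes from the openstack-utils package mentioned in the TIP below):

$ scp /etc/neutron/neutron.conf root@vm02-compute:/etc/neutron/
$ scp /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
    root@vm02-compute:/etc/neutron/plugins/openvswitch/
$ ssh root@vm02-compute openstack-config --set \
    /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs local_ip 192.168.122.100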

A couple of non-deterministic issues I’m still investigating on a new setup that uses a non-default libvirt network as the external network (on my current setup I used libvirt’s default subnet, 192.168.x.x; Lars pointed out that could be the cause of some of the routing issues):

  • Sporadic loss of networking for Nova guests. This gets resolved (at least partially) when I invoke VNC for the guest (via SSH tunneling) and do some basic diagnostics; networking then comes up just fine in the guests again (do the GRE tunnels go stale?). tcpdump analysis on the various network devices (tunnels/bridges/tap devices) on both the Controller and Compute nodes is in progress.
  • Nova guests fail to acquire DHCP leases (I can clearly observe this when I explicitly do an ifdown eth0 && ifup eth0 from the VNC console of the guest). The Neutron DHCP agent seems to be flaky here.

TIP: On Fedora, the openstack-utils package (from version openstack-utils-2013.2-2.fc21.noarch) includes a neat utility called openstack-service which allows you to trivially control OpenStack services. This makes life much easier — thanks, Lars!
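
For instance, something like the below shows the status of (or restarts) all the Neutron services on a node in one go:

$ openstack-service status neutron
$ openstack-service restart neutron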


Fedora 20 Virtualization Test Day — 8OCT2013

Cole Robinson has announced the Virtualization Test Day for Fedora 20.

For convenience, here’s what’s needed to get started.

And, as usual — tests can be performed any day from now, and the wiki can be updated with your results/observations. It’s just that on the test day, more virtualization developers will be available on IRC (#fedora-test-day on Freenode) to answer questions, etc.


FLOCK 2013, Retrospective

Heya,

FLOCK just concluded last week. Given the very short time-frame, the conference was very well organized! (I know what pains that takes, from first-hand experience volunteering to organize FUDCon Pune a couple of years ago.) While not undermining others’ efforts, I couldn’t agree more with Spot about Ruth: “To put it bluntly, anything at Flock that you liked was probably her handiwork.” Her super-efficiency shined through everything at FLOCK.

I attempted to write while in the middle of a couple of sessions, but I just couldn’t context switch. (For instance, I have a partial draft that’s started off with “I’m currently in the middle of Miloslav Trmač’s discussion about Fedora Revamp…”)

Here’s my (verbose) summary of how I spent my time at FLOCK.

Talks that I have attended

  • Matthew Miller’s discussion of “cloud”, and should Fedora care?: This was a very high-level overview of the topic. For me, the main takeaway was the Fedora Cloud SIG’s near-term goals — more visibility, better documentation.
  • Crystal Ball talk/discussion by Stephen Gallagher, which discussed where Fedora is going over the next five years. All the discussion and notes are here.
  • Kernel Bug Triage, Live, by Dave Jones. Dave walked us through the process of triaging a bug, and also introduced some scripts he wrote to manage bugzilla workflow and related triaging aspects.
  • Fedora Revamp by Miloslav Trmač — this was more of a discussion about how to improve various aspects of Fedora. On a broad level, the topics discussed included: making Rawhide more usable, the need for more automated tests, etc. The previous mailing list discussion thread is here.
  • What’s new with SELinux, by Dan Walsh — off the top of my memory, I only recall a couple of things from this talk, where Dan discussed: new confined domains, new permissive domains, the sepolicy tool chain, and what’s upcoming (he mentioned a newer coreutils, with upgraded cp, mv, install and mkdir commands that provide a -Z flag). Some context is here.
  • Secure Linux Containers, by Dan Walsh: This was one of my favourite sessions. I was interested to learn a bit more about containers. OpenStack heavily uses network namespaces to provide networking, and I thought this session would give some high-level context, and I wasn’t disappointed. Dan discussed several topics: application sandboxes, Linux containers, the different types of Linux namespaces (Mount, UTS, IPC, Network, PID, User), and cgroups. He then went on to elaborate on different types of containers (and their use cases): generic application containers, systemd application containers, chroot application containers, libvirt-lxc, virt-sandbox, virt-sandbox-service, and systemd-nspawn.
  • PKI made easy: Ade Lee gave an overview of PKI, Dogtag and its integration with FreeIPA. I worked with Ade on this project and the associated Red Hat products for about three years. It was nice to meet him in person for the first time after all these years.
  • Fedora QA Meeting: On Monday (12 AUG), I participated along with Adam Williamson and the rest of the Fedora QA team. The video is here. Major topics:
    • ARM release criteria / test matrix adjustments
    • Visible Cloud release criteria / test matrix adjustments.

Among other sessions, I also participated in the “Hack the Future” (of Fedora) session with Matthew Miller, and I enjoyed the conference recap discussion with FESCo (the Fedora Engineering Steering Committee).

OpenStack Test Event

On day two of FLOCK, I conducted an OpenStack test event; earlier I blogged about it here. This session wasn’t recorded, as it was a hands-on test event. We had about 20 participants (the capacity of the room was around 25).

Some notes:

  • Russell Bryant, the Nova PTL, was in the room; not feeling qualified enough myself, I made him give a quick 5-minute introduction to OpenStack :-). Later, Jeff Peeler from the OpenStack Heat project also gave a brief introduction to Heat and what it does. RDO community manager Rich Bowen was also present and participated in the event.
  • Notes from the test event are here.
  • Russell Bryant (thank you!) kindly offered to provide temporary access to virtual machines (from the Rackspace cloud) for participants who didn’t have enough hardware on their laptops to quickly set up and test OpenStack. I know of at least a couple of people who set it up successfully using these temporary VM instances.
  • A couple of people hit the bogus “install successfully finished” bug. Clean-up and re-running wasn’t really straightforward in this case.
  • Another participant hit an issue where packstack adds ‘libvirt_type=kvm’ in nova.conf /despite/ the machine not having hardware virtualization extensions. It should ideally add ‘libvirt_type=qemu’ if hardware extensions aren’t found (this should be double checked). And at least one person hit MySQL login credential errors (which I also hit myself on one of my test runs) with an all-in-one packstack run.

Overall: given the time frame of 2 hours, and the complexity involved in setting up OpenStack, we had decent participation. I know at least 5-7 people had it configured and running. Thanks to Russell Bryant, Jeff Peeler, Sandro Mathys and Rich Bowen for helping and assisting participants during the test event.

TODOs/Notes/Hallway

These are arbitrary discussions, notes to self, todos, and (to me) amusing snippets from hallway conversations. Let’s see what I can recall.

  • I ran into Luke Macken in the hotel lobby one of the evenings; we briefly talked about virtualization, and he mentioned he had tried PCI passthrough of a sound card with KVM/QEMU and couldn’t get it working. I said I’d try it and get back to him. (Note to self: add this as the 198th item on the TODO list.)
  • From discussions with Matthew Miller: we need to switch to Oz from Appliance Creator to generate Fedora Cloud images.
  • Try out Ansible’s OpenStack deployer tool.
  • Had an interesting hallway chat with Bill Nottingham and Miloslav Trmač in the relaxed environment of the Charleston Aquarium (do /not/ miss this mesmerizing place if you’re in Charleston!). Topics: various cloud technologies, and mailing list etiquette (the much-discussed recent LKML epic thread about conflicts, strong language to convey your points, etc.).
  • Books: At the day-1 evening dinner, Paul Frields mentioned ‘Forever War’ as one of his favorites and highly recommended it. The next day, during the evening event at the Mynt bar, Greg DeKoenigsberg said, “anything & everything by Jim Collins”. The same evening, while still discussing books, Tom Callaway said “no, that’s not the right book” (or something along those lines) when I mentioned ‘Elements of Style’. I don’t know what his reasoning was; I liked the book anyway. :-)
  • I learnt interesting details about life in Czech Republic from conversations with Jan Zeleny.
  • A lot of little/long conversations with Adam Williamson (Fedora QA czar), Robyn Bergeron, Ruth Suehle, Christopher Wickert, Rahul Sundaram, Cole Robinson, Toshio Kuratomi, and all the others I’ve missed naming here.
  • Thanks, Toshio, for the delicious peaches! (He carried that large box of peaches in his cabin luggage.) Also thank you for the nice conversation last Tuesday, and for taking us to the breakfast place Kitchen, on King Street.
  • Also, I tried Google Glass, thanks to Jon Masters. But it wasn’t quite intuitive for me, as I already wear prescription glasses.


OpenStack test event at Flock — Fedora Contributor’s conference (AUG 9-12)

Firstly, this post should have come from Matthias Runge, a long-time Fedora contributor and OpenStack Horizon developer. Earlier this May, Matthias proposed a FLOCK hack-fest session to test/fix the latest OpenStack packages on Fedora, and it was accepted. Matthias, unfortunately, cannot make it to Charleston due to personal reasons (duh, he should have been there!). He trusted that I could swap places with him (I work with him as part of the Cloud Engineering team at Red Hat). Thanks, Matthias!

Secondly, it’s not really flash news that FLOCK (the Fedora contributors’ conference), the first edition of the revamped (and now erstwhile) FUDCon, will be taking place in about two weeks (Aug 9-12) in Charleston, South Carolina.

If you care about open source Infrastructure-as-a-Service software, and are interested in contributing, learning or deploying, you’re more than welcome to participate (needless to say) in this session. There will also be a couple of core OpenStack developers hanging around during the conference!

Some practical information for the OpenStack test event:

  • Abstract: OpenStack FLOCK test event abstract is here.
  • Prerequisites: This is supposed to be a hands-on session where we try to set up and test OpenStack. Given the nature of OpenStack, having a laptop with 4G (or more) of memory and at least 50G of free disk space will make life easier while setting it up.
  • Current milestone packages: OpenStack Havana, milestone-2 packages are here.
  • Trunk packages: These are built from OpenStack upstream trunk on an hourly basis. As of this writing, Neutron (OpenStack networking) server packages are not yet available (coming soon – people are working hard on this!); nova-network is the temporary recommendation. Further details are in the quickstart instructions.
  • Bug tracking: If there are OpenStack Fedora/EL packaging, installation or related issues, file them in RH Bugzilla. If you’re testing from upstream trunk, it’s better to file them in the upstream issue tracker.
  • An etherpad instance with more verbose information is here. If you have comments/suggestions/notes, please add them there.

Finally, there are plenty of interesting sessions – check out the schedule!

PS: Damn it, Seth Vidal — I was dearly looking forward to your user build tools talk :( Just noticed Kyle McMartin is doing it on Seth’s behalf.


Configuring Libvirt guests with an Open vSwitch bridge

In the context of OpenStack networking, I was trying to explore Open vSwitch. I felt it was better to go one step back and try it with a pure libvirt guest before trying it with OpenStack networking.

Why Open vSwitch compared to a regular Linux bridge?

  • In short (as Thomas Graf, kernel networking subsystem developer, put it) — Software Defined Networking (SDN).
  • Open vSwitch’s upstream documentation provides a more detailed explanation.

Here’s a simple scenario: the machine under test has a single physical NIC, obtaining its IP address from DHCP, and runs KVM guests managed via libvirt.

Install Open vSwitch

Install the Open vSwitch package (this is on Fedora 19):

$ yum install openvswitch -y

Enable the openvswitch systemd unit file, and start the daemon:

$ systemctl enable openvswitch.service
$ systemctl start openvswitch.service

Check the status of the Open vSwitch service, to ensure it’s ‘active’:

$ systemctl status openvswitch.service

Configure Open vSwitch (OVS) bridge
Before you proceed, ensure you have physical access, or access via a serial console, to the machine, because associating a physical interface with an Open vSwitch bridge will result in lost connectivity.
The reasoning is explained here, under the ‘Configuration problems’ section.

Add an OVS bridge device:

$ ovs-vsctl add-br ovsbr0

Associate the OVS bridge device with eth0 (or em1). (At this point, network connectivity will be lost.)

$ ovs-vsctl add-port ovsbr0 eth0

I was obtaining the IP address for my host from DHCP, so I first cleared it from the physical interface and associated it with the Open vSwitch bridge device (ovsbr0):

$ ifconfig eth0 0.0.0.0
$ ifconfig ovsbr0 10.xx.yyy.zzz

I killed the existing dhclient instance on ‘eth0’, and initiated it on ovsbr0:

$ dhclient ovsbr0 &
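
Note that the above hand configuration lasts only until the next reboot. To make it persistent on Fedora, the ifcfg integration shipped with the openvswitch package can be used; a sketch of the two files, assuming the DHCP-on-bridge setup described above:

$ cat /etc/sysconfig/network-scripts/ifcfg-ovsbr0
DEVICE=ovsbr0
DEVICETYPE=ovs
TYPE=OVSBridge
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=eth0
ONBOOT=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=ovsbr0
ONBOOT=yes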

List the OVS database contents

 
$ ovs-vsctl show
    3dc7f3e3-5872-47c0-ba6f-1cb12065f4d0
        Bridge "ovsbr0"
            Port "eth0"
                Interface "eth0"
            Port "ovsbr0"
                Interface "ovsbr0"
                    type: internal
        ovs_version: "1.10.0"

Update libvirt guest’s bridge source

I have an existing KVM guest, managed by libvirt, with its default network source associated with libvirt’s ‘virbr0’. Let’s modify its network to use the Open vSwitch bridge.

Edit the libvirt guest XML:

$ virsh edit f18vm

The interface element should look as below (note the source bridge and virtualport type attributes):

[...]
    <interface type='bridge'>
      <mac address='52:54:00:fb:00:01'/>
      <source bridge='ovsbr0'/>
      <virtualport type='openvswitch'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
[...]

Once the guest XML is edited and saved, dump its contents to stdout; you’ll notice an additional interfaceid attribute added automatically:

    $ virsh dumpxml f18vm | grep bridge -A8
       <interface type='bridge'>
         <mac address='52:54:00:fb:00:01'/>
         <source bridge='ovsbr0'/>
         <virtualport type='openvswitch'>
           <parameters interfaceid='74b6858e-8012-4caa-85c7-b64902a19605'/>
         </virtualport>
         <model type='virtio'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
       </interface>
       <serial type='pty'>
         <target port='0'/>

Start the guest, and check if its IP address matches the host subnet:

$ virsh start f18vm --console
$ ifconfig eth0
