libvirt: default network conflicts (not anymore)

Increasingly there’s a need for libvirt networking to work inside a virtual machine that is already running on the default network (192.168.122.0/24). The immediate practical case where this comes up is while testing nested virtualization: start a guest (L1) with default libvirt networking, and if you then install libvirt on it to run a (nested) guest (L2), there’s a routing conflict because the L1 guest already has a route for 192.168.122.0/24. Until now, I avoided this by creating a new libvirt network with a different IP range (or by manually editing the default libvirt network).
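Before this fix, that manual workaround looked roughly like the sketch below: on the L1 guest, move the default network to some other range by hand (192.168.150.0/24 is an arbitrary choice here) and restart it.

$ virsh net-edit default
[. . .]
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.2' end='192.168.150.254'/>
    </dhcp>
  </ip>
[. . .]

# Restart the network so the change takes effect
$ virsh net-destroy default
$ virsh net-start default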

To alleviate this routing conflict, Laine Stump (libvirt developer) has now pushed a patch (with a tiny follow-up) to upstream libvirt git. (Relevant libvirt bug with discussion.)

I tested the patch last night; it works well.

Assuming your physical host (L0) has the default libvirt network route:

$ ip route show | grep virbr
192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1

Now, start a guest (L1) and when you install libvirt (which has the said fix) on it, it notices the existing route of 192.168.122.0/24 and creates the default network on the next free network range (starting its search with 192.168.124.0/24), thus avoiding the routing conflict.

 $ ip route show
  default via 192.168.122.1 dev ens2  proto static  metric 1024 
  192.168.122.0/24 dev ens2  proto kernel  scope link  src 192.168.122.62 
  192.168.124.0/24 dev virbr0  proto kernel  scope link  src 192.168.124.1

Relevant snippet of the default libvirt network (you can notice the new network range):

  $ virsh net-dumpxml default | grep "ip address" -A4
    <ip address='192.168.124.1' netmask='255.255.255.0'>
      <dhcp>
        <range start='192.168.124.2' end='192.168.124.254'/>
      </dhcp>
    </ip>

So, please test it for your use cases and report bugs, if any. (Build RPMs locally from git master, or wait for the next upstream libvirt release, due in early October.)

[Update: On Fedora, this fix is available from version libvirt-1.2.8-2.fc21 onwards.]


Live disk migration with libvirt blockcopy

[08-JAN-2015 Update: Correct the blockcopy CLI and update the final step to re-use the copy to be consistent with the scenario outlined at the beginning. Corrections pointed out by Gary R Cook at the end of the comments.]
[17-NOV-2014 Update: With recent libvirt/QEMU improvements, there’s another (relatively faster) way to take a live disk backup, via libvirt blockcommit; here’s an example]

The QEMU and libvirt projects have had a lot of block layer improvements in their last few releases (libvirt 1.2.6 & QEMU 2.1). This post discusses a method to do live disk storage migration with libvirt’s blockcopy.

Context on libvirt blockcopy
Simply put, blockcopy facilitates virtual machine live disk image copying (or mirroring) — primarily useful for different use cases of storage migration:

  • Live disk storage migration
  • Live backup of a disk image and its associated backing chain
  • Efficient non-shared storage migration (with a combination of the virsh operations snapshot-create-as+blockcopy+blockcommit)
  • As of the IceHouse release, the OpenStack Nova project also uses a variation of libvirt blockcopy, via the virDomainBlockRebase API (through its Python binding), to create live snapshots with nova image-create. (More details on this in an upcoming blog post.)

A blockcopy operation has two phases: (a) all of the source disk content is copied (mirrored) to the destination; this phase can be canceled to revert to the source disk. (b) Once libvirt gets a signal that source and destination content are in sync, the mirroring job remains active until an explicit virsh blockjob [. . .] --abort is issued to end the mirroring operation gracefully. If desired, this explicit abort can be avoided by supplying the --finish option. See the virsh manual page for verbose details.
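For reference, here is roughly what that explicit flow looks like when --pivot is not supplied (a sketch; the domain name and destination path are placeholders):

# Start the mirroring job, but do not pivot automatically
$ virsh blockcopy --domain testvm vda /path/to/copy.qcow2 --wait --verbose

# Check the job; 100% means source and destination are in sync
$ virsh blockjob testvm vda --info

# Then either pivot the guest to the copy . . .
$ virsh blockjob testvm vda --pivot

# . . . or end the job gracefully and keep using the source disk
$ virsh blockjob testvm vda --abort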

Scenario: Live disk storage migration

To illustrate a simple case of live disk storage migration, we’ll use a disk image chain of depth 2:

base <-- snap1 <-- snap2 (Live QEMU) 

Once live blockcopy is complete, the resulting status of disk image chain ends up as below:

base <-- snap1 <-- snap2
          ^
          |
          '------- copy (Live QEMU, pivoted)

I.e. once the operation finishes, ‘copy’ shares the backing file chain of ‘snap1’ and ‘base’, and the live QEMU is pivoted to use ‘copy’.

Prepare disk images, backing chain & define the libvirt guest

[For simplicity, all virtual machine disks are QCOW2 images.]

Create the base image:

 $ qemu-img create -f qcow2 base.qcow2 1G

Edit the base disk image using guestfish: create a partition, make a file system, and add a file to the base image so that we can distinguish its contents from those of its qcow2 overlay disk images:

$ guestfish -a base.qcow2 
[. . .]
><fs> run 
><fs> part-disk /dev/sda mbr
><fs> mkfs ext4 /dev/sda1
><fs> mount /dev/sda1 /
><fs> touch /foo
><fs> ls /
foo
lost+found
><fs> exit

Create a QCOW2 overlay snapshot ‘snap1’, with ‘base’ as its backing file:

$ qemu-img create -f qcow2 -b base.qcow2 \
  -o backing_fmt=qcow2 snap1.qcow2

Add a file to snap1.qcow2:

$ guestfish -a snap1.qcow2 
[. . .]
><fs> run
><fs> mount /dev/sda1 /
><fs> touch /bar
><fs> ls /
bar
foo
lost+found
><fs> exit

Create another QCOW2 overlay snapshot ‘snap2’, with ‘snap1’ as its backing file:

$ qemu-img create -f qcow2 -b snap1.qcow2 \
  -o backing_fmt=qcow2 snap2.qcow2

Add another test file ‘baz’ into snap2.qcow2 using guestfish (refer to previous examples above) to distinguish contents of base, snap1 and snap2.
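For completeness, that step would look something like this (same pattern as the guestfish sessions above):

$ guestfish -a snap2.qcow2
[. . .]
><fs> run
><fs> mount /dev/sda1 /
><fs> touch /baz
><fs> ls /
bar
baz
foo
lost+found
><fs> exit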

Create a simple libvirt XML file as below, with source file pointing to snap2.qcow2 — which will be the active block device (i.e. it tracks all new guest writes):

$ cat <<EOF > /etc/libvirt/qemu/testvm.xml
<domain type='kvm'>
  <name>testvm</name>
  <memory unit='MiB'>512</memory>   
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/export/vmimages/snap2.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>   
  </devices>
</domain>
EOF

Define the guest and start it:

$ virsh define /etc/libvirt/qemu/testvm.xml
  Domain testvm defined from /etc/libvirt/qemu/testvm.xml
$ virsh start testvm
Domain testvm started

Perform live disk migration
Undefine the running libvirt guest to make it transient[*]:

$ virsh dumpxml --inactive testvm > /var/tmp/testvm.xml
$ virsh undefine testvm

Check the current block device before performing the live disk migration:

$ virsh domblklist testvm
Target     Source
------------------------------------------------
vda        /export/vmimages/snap2.qcow2

Optionally, display the backing chain of snap2.qcow2:

$ qemu-img info --backing-chain /export/vmimages/snap2.qcow2
[. . .] # Output removed for brevity

Initiate blockcopy (live disk mirroring):

$ virsh blockcopy --domain testvm vda \
  /export/vmimages/copy.qcow2 \
  --wait --verbose --shallow \
  --pivot

Details of the above command: it creates the copy.qcow2 file at the specified path; --shallow performs a shallow blockcopy (i.e. the ‘copy’ shares the backing chain) of the current block device (vda); --pivot pivots the live QEMU to use the ‘copy’.

Confirm that QEMU has pivoted to the ‘copy’ by enumerating the current block device in use:

$ virsh domblklist testvm
Target     Source
------------------------------------------------
vda        /export/vmimages/copy.qcow2

Again, display the backing chain of ‘copy’; it should match the resulting chain noted in the Scenario section above.

$ qemu-img info --backing-chain /export/vmimages/copy.qcow2

Enumerate the contents of copy.qcow2:

$ guestfish -a copy.qcow2 
[. . .]
><fs> run
><fs> mount /dev/sda1 /
><fs> ls /
bar
baz
foo
lost+found
><fs> quit

(Notice above that all the content from base.qcow2, snap1.qcow2, and snap2.qcow2 is visible through copy.qcow2; snap2’s data was mirrored, while base and snap1 are shared as its backing chain.)

Edit the saved guest XML (/var/tmp/testvm.xml) so that it uses copy.qcow2, then define the guest again to make it persistent:

$ vi /var/tmp/testvm.xml
# Replace the <source file='/export/vmimages/snap2.qcow2'/>
# with <source file='/export/vmimages/copy.qcow2'/>
[. . .]

$ virsh define /var/tmp/testvm.xml

[*] Reason for undefining and re-defining the guest: as of this writing, QEMU does not yet support persistent dirty bitmaps, which would allow restarting a QEMU process with disk mirroring intact. Patches for this have been in progress upstream for a while. Until they land in mainline QEMU, the current approach (as illustrated above) is: temporarily make the running libvirt guest transient, perform the live blockcopy, and then make the guest persistent again. (Thanks to Eric Blake, one of libvirt project’s principal developers, for this detail.)


On bug reporting. . .

[This has been sitting in my drafts for a while; time to hit send. A long time ago I wrote something like this to a Red Hat internal mailing list, and thought I’d put it out here too.]

A week ago, I was participating in one of the regular upstream OpenStack bug triage sessions (for Nova project) to keep up with the insane flow of bugs. All open source projects are grateful for reported bugs. But more often than not, a good chunk of bugs in such large projects end up being bereft of details.

Writing a good bug report is hard. Writing a coherent, reproducible bug report is much harder (and takes more time), especially when you’re involved in complex cloud projects like OpenStack, where you may have to care about everything from the kernel to user space.

A few things to consider including in a bug report. The following are all dead-obvious points (and are usually part of a high-level template in issue trackers like Bugzilla). That said, however many times they’re repeated, it’s never enough:

  • Summary. A concise summary — is it a bug? an RFE? a tracker-bug?
  • Description. A clear description of the issue and its symptoms in chronological order.
  • Version details. e.g. Havana? IceHouse?, hypervisor details; API versions, anything else that’s relevant.
  • Crystal clear details to reproduce the bug. Bonus points for a reproducer script.
  • Test environment details. (This is crucial.)
    • Most “cloud” software testing is dearly dependent on the test environment. The clearer the details, the fewer the round-trips between developers, test engineers, packagers, triagers, etc.
    • Note down if any special hardware is needed for testing, e.g. an exotic NAS, etc.
    • If you altered anything — config files, especially code located in /usr/lib/pythonx.x/site-packages/, or any other dependent lower-layer project configurations (e.g. libvirt’s config file), please say so.
  • Actual results. Post the exact, unedited details of what happens when the bug in question is triggered.
  • Expected results. Describe what is the desired behavior.
  • Additional investigative details (where appropriate).
    • If you’ve done a lot of digging into an issue, writing a detailed summary (even better: a blog post) while it’s fresh in memory is very useful. Additional info such as configuration settings, caveats, relevant log fragments, the stderr of a script or command being executed, and trace details is all useful for archival purposes (and years later, the context comes in very handy).

Why?

  • Useful for new test engineers who do not have all the context of a bug.
  • Useful for documentation writers to help them write correct errata text/release notes.
  • Useful for non-technical folks reading the bugs/RFEs. Clear information saves a heck of a lot of time.
  • Useful for folks like product and program managers who aren’t always in the trenches.
  • Useful for downstream support organizations.
  • Should there be a regression years later, having all the info to test/reproduce in the bug, right there makes your day!
  • Reduces needless round-trips of NEEDINFO.
  • Useful for new users referring to these bugs in a different context.

Overall, a very fine bug report reading experience.

You get the drift!


Upcoming: OpenStack workshop at Fedora Flock conference, Prague (Aug 6-9, 2014)

UPDATE: I’ll not be able to make it to Prague due to plenty of other things going on. However, Jakub Ružička, an OpenStack developer and RDO project packager kindly agreed (thank you!) to step in to do the talk/workshop.

The Fedora Project’s yearly flagship conference, Flock, is in Prague (August 6-9, 2014) this time. A proposal I submitted for an OpenStack workshop was accepted. The aim of the workshop is to do a live setup of a minimal, 2-node OpenStack environment using virtual machines (which also involves KVM-based nested virtualization). The workshop also aims to help understand some of the OpenStack Neutron networking components and related debugging techniques.

Slightly more verbose details are in the abstract. There’s also an etherpad for capturing notes/planning and for remote participation.


Create a minimal Fedora Rawhide guest

Create a Fedora 20 guest from scratch using virt-builder, fully update it, and install a couple of packages:

$ virt-builder fedora-20 -o rawhide.qcow2 --format qcow2 \
  --update --selinux-relabel --size 40G \
  --install "fedora-release-rawhide yum-utils"

[UPDATE: Starting with Fedora 21, the fedora-release-rawhide package is renamed to fedora-repos-rawhide.]

Import the disk image into libvirt:

$ virt-install --name rawhide --ram 4096 --disk \
  path=/home/kashyapc/rawhide.qcow2,format=qcow2,cache=none \
  --import
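To get at the guest’s serial console, virsh works (assuming the guest is named rawhide as above):

$ virsh console rawhide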

Log in via the serial console and upgrade the guest to Rawhide:

$ yum-config-manager --disable fedora updates updates-testing
$ yum-config-manager --enable rawhide
$ yum --releasever=rawhide distro-sync --nogpgcheck
$ reboot

Optionally, create a QCOW2 internal snapshot (live or offline) of the guest:

$ virsh snapshot-create-as rawhide snap1 \
  "Clean Rawhide 19MAR2014"

Here are a couple of methods on how to upgrade to Rawhide.


Notes for building KVM-based virtualization components from upstream git

I frequently need the latest KVM, QEMU, libvirt and libguestfs while testing with OpenStack RDO. I either build from the upstream git master branches or from Fedora Rawhide (mostly this suffices). Below I describe the exact sequence I use to build them from git. These instructions are available in some form in the README files of the said packages; I’m just noting them here explicitly for convenience. My primary development/test environment is Fedora, but it should be similar on other distributions. (Maybe I should just script it all.)

Build KVM from git

I think it’s worth noting the distinction between the traditional master branch and two other KVM git branches: remotes/origin/queue and remotes/origin/next. The queue and next branches are the same most of the time, with the distinction that queue is the branch where patches are usually tested before moving them to next. Commits from the next branch are then submitted (as a PULL request) to Linus during the next kernel merge window. (I recall this from an old IRC conversation with Gleb Natapov, one of the previous KVM maintainers; thank you, Gleb.)

# Clone the repo
$ git clone \
  git://git.kernel.org/pub/scm/virt/kvm/kvm.git && cd kvm

# To test out of tree patches,
# it's cleaner to do in a new branch
$ git checkout -b test_branch

# Make a config file
$ make defconfig

# Compile
$ make -j4 && make bzImage && make modules

# Install
$ sudo -i
$ make modules_install && make install
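If you want to base your test branch on the next (or queue) branch specifically, rather than on master, something along these lines works (a sketch; the branch name is arbitrary):

# Inside the cloned kvm tree: list the remote branches
$ git branch -r

# Create a local branch tracking the 'next' branch
$ git checkout -b next_test origin/next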

Build QEMU from git

To build QEMU (only x86_64 target) from its git:

# Install build dependencies of QEMU
$ yum-builddep qemu

# Clone the repo (into ~/src/qemu)
$ git clone git://git.qemu.org/qemu.git ~/src/qemu

# Create a build directory to isolate source directory 
# from build directory
$ mkdir -p ~/build/qemu && cd ~/build/qemu

# Run the configure script
$ ~/src/qemu/configure --target-list=x86_64-softmmu \
  --disable-werror --enable-debug 

# Compile
$ make -j4

I previously discussed QEMU building here.

Build libvirt from git

To build libvirt from its upstream git:

# Install build dependencies of libvirt
$ yum-builddep libvirt

# Clone the libvirt repo (into ~/src/libvirt)
$ git clone git://libvirt.org/libvirt.git ~/src/libvirt

# Create a build directory to isolate source directory
# from build directory
$ mkdir -p ~/build/libvirt && cd ~/build/libvirt

# Run the autogen script
$ ~/src/libvirt/autogen.sh

# Compile
$ make -j4

# Run tests
$ make check

# Invoke libvirt programs without having to install them
$ ./run tools/virsh [. . .]

[Or, prepare RPMs and install them]

# Make RPMs (assumes Fedora `rpmbuild` setup
# is properly configured)
$ make rpm

# Install/update
$ yum update *.rpm

Build libguestfs from git
To build libguestfs from its upstream git:

# Install build dependencies of libguestfs
$ yum-builddep libguestfs

# Clone the libguestfs repo
$ git clone git://github.com/libguestfs/libguestfs.git \
   && cd libguestfs

# Run the autogen script
$ ./autogen.sh

# Compile
$ make -j4

# Run tests
$ make check

# Invoke libguestfs programs without having to install them
$ ./run guestfish [. . .]

If you’d prefer libguestfs to use the custom QEMU built from git (as noted above), QEMU wrappers are useful here.
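A wrapper is essentially a tiny shell script that libguestfs is pointed at instead of the system QEMU; a minimal sketch, assuming the QEMU build directory used earlier and that your libguestfs honours the LIBGUESTFS_HV environment variable (older releases use LIBGUESTFS_QEMU):

$ cat > ~/bin/qemu-wrapper.sh <<'EOF'
#!/bin/sh
# Run the locally built QEMU instead of the system one
exec $HOME/build/qemu/x86_64-softmmu/qemu-system-x86_64 "$@"
EOF
$ chmod +x ~/bin/qemu-wrapper.sh

# Point libguestfs at the wrapper
$ export LIBGUESTFS_HV=~/bin/qemu-wrapper.sh
$ ./run guestfish [. . .]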

As an alternative to building from upstream git, if you’d prefer to build the above components locally from Fedora dist-git (master), here are some instructions.


Create a minimal, automated Btrfs Fedora guest template with Oz

I needed to create an automated Btrfs guest for a test. A trivial virt-install based automated script couldn’t complete the guest install (on a Fedora Rawhide host) — it hung perpetually once it tried to retrieve .treeinfo, vmlinuz and initrd.img. On a cursory investigation with guestfish, I couldn’t pinpoint the root cause. Filed a bug here.

[Edit, 31JAN2014: The above is fixed by adding --console=pty to the virt-install command line; refer to the above bug for discussion.]

In case someone wants to reproduce with versions I noted in the above bug:

  $ wget http://kashyapc.fedorapeople.org/virt/create-guest-qcow2-btrfs.bash
  $ chmod +x create-guest-qcow2-btrfs.bash
  $ ./create-guest-qcow2-btrfs.bash fed-btrfs2 f20

Oz
So, I resorted to Oz – a utility to create automated installs of various Linux distributions. I previously wrote about it here.

Below I describe a way to use it to create a minimal, automated Btrfs Fedora 20 guest template.

Install it:

 $ yum install oz -y 

A minimal Kickstart file with Btrfs partitioning:

$ cat Fedora20-btrfs.auto
install
text
keyboard us
lang en_US.UTF-8
network --device eth0 --bootproto dhcp
rootpw fedora
firewall --enabled --service=ssh
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --append="console=tty0 console=ttyS0,115200"
zerombr
clearpart --all --drives=vda
autopart --type=btrfs
reboot

%packages
@core
%end

A TDL (Template Description Language) file that Oz needs as input:

$ cat f20.tdl
<template>
  <name>f20btrfs</name>
  <os>
    <name>Fedora</name>
    <version>20</version>
    <arch>x86_64</arch>
    <install type='url'>
      <url>http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/</url>
    </install>
    <rootpw>fedora</rootpw>
  </os>
  <description>Fedora 20</description>
</template>

Invoke oz-install (supply the Kickstart & TDL files, turn on debugging to level 4 — all details):

$ oz-install -a Fedora20-btrfs.auto -d 4 f20.tdl 
[. . .]
INFO:oz.Guest.FedoraGuest:Cleaning up after install
Libvirt XML was written to f20btrfsJan_29_2014-11:38:44

Define the above Libvirt guest XML, start it over a serial console:

$  virsh define f20btrfsJan_29_2014-11:38:44
Domain f20btrfs defined from f20btrfsJan_29_2014-11:38:44

$ virsh start f20btrfs --console
[. . .]
Connected to domain f20btrfs
Escape character is ^]

Fedora release 20 (Heisenbug)
Kernel 3.11.10-301.fc20.x86_64 on an x86_64 (ttyS0)

localhost login: 

It’s automated, yes, but it’s still slightly tedious (the TDL file looks redundant at this point) for creating custom guest image templates.


Capturing x86 CPU diagnostics

Some time ago I learnt, from Paolo Bonzini (upstream KVM maintainer), about this little debugging utility – x86info (written by Dave Jones) – which captures detailed information about CPU diagnostics: TLB, cache sizes, CPU feature flags, model-specific registers, etc. Take a look at its man page for specifics.

Install:

 $  yum install x86info -y 

Run it and capture the output in a file:

 $  x86info -a 2>&1 | tee stdout-x86info.txt  

As part of debugging KVM-based nested virtualization issues, here I captured x86info of L0 (bare metal, Intel Haswell), L1 (guest hypervisor), L2 (nested guest, running on L1).


virt-builder, to trivially create various Linux distribution guest images

I frequently use virt-builder (part of libguestfs-tools package) as part of my work flow.

Rich has documented it extensively; still, I felt its sheer simplicity is worth pointing out again.

For instance, if you need to create a Fedora 20 guest of size 100G, and of qcow2 format, it’s as trivial as (no need for root login):

$ virt-builder fedora-20 --format qcow2 --size 100G
[   1.0] Downloading: http://libguestfs.org/download/builder/fedora-20.xz
#######################################################################  100.0%
[ 131.0] Planning how to build this image
[ 131.0] Uncompressing
[ 139.0] Resizing (using virt-resize) to expand the disk to 100.0G
[ 220.0] Opening the new disk
[ 225.0] Setting a random seed
[ 225.0] Setting random root password [did you mean to use --root-password?]
Setting random password of root to N4KkQjZTgdfjjqJJ
[ 225.0] Finishing off
Output: fedora-20.qcow2
Output size: 100.0G
Output format: qcow2
Total usable space: 97.7G
      Free space: 97.0G (99%)

Then, import the just created image:

$ virt-install --name guest-hyp --ram 8192 --vcpus=4 \
  --disk path=/home/test/vmimages/fedora-20.qcow2,format=qcow2,cache=none \
  --import

It provides a serial console for login.
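Attaching to that serial console is then just a matter of (assuming the guest is named guest-hyp as above; press Ctrl+] to leave the console):

$ virsh console guest-hyp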

You can also create several other distribution variants – Debian, etc.


Script to create Neutron tenant networks

In my two-node OpenStack setup (RDO on Fedora 20), I often have to create multiple Neutron tenant networks (here you can read more on what a tenant network is) for various testing purposes.

To avoid this manual process, here’s a trivial script that creates a new Neutron tenant network after you provide a few positional parameters to an existing OpenStack setup. This assumes there’s a working OpenStack setup with Neutron configured. I tested this on Neutron + OVS + GRE, but it should work with any other Neutron plugin, as tenant networks are a Neutron concept (and not specific to plugins).

Usage:

$ ./create-new-tenant-network.sh                \
                    TENANTNAME USERNAME         \
                    SUBNETSPACE ROUTERNAME      \
                    PRIVNETNAME PRIVSUBNETNAME

To create a new tenant network with 14.0.0.0/24 subnet:

$ ./create-new-tenant-network.sh \
  demoten1 tuser1                \
  14.0.0.0 trouter1              \
  priv-net1 priv-subnet1

The script does the below, in that order:

  1. Creates a Keystone tenant called demoten1.
  2. Then, creates a Keystone user called tuser1 and associates it
    with demoten1.
  3. Creates a Keystone RC file for the user (tuser1) and sources it.
  4. Creates a new private network called priv-net1.
  5. Creates a new private subnet called priv-subnet1 on priv-net1.
  6. Creates a router called trouter1.
  7. Associates the router (trouter1 in this case) to an existing external network (the script assumes it’s named ext) by setting it as the router’s gateway.
  8. Associates the private network interface (priv-net1) to the router (trouter1).
  9. Adds Neutron security group rules for this test tenant (demoten1) for ICMP and SSH.

To test if it’s all working, try booting a new Nova guest on the tenant network; it should acquire an IP address from the 14.0.0.0/24 subnet.
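A quick way to do that test, assuming an image (CirrOS here, as an example) is already registered in Glance and the script has created keystonerc_tuser1:

$ source keystonerc_tuser1
$ nova boot --flavor m1.tiny --image cirros \
  --nic net-id=$(neutron net-list | grep priv-net1 | awk '{print $2;}') \
  testvm1

# The instance should get a fixed IP from 14.0.0.0/24
$ nova list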

Posting the relevant part of the script:

[. . .]
# Source the admin credentials
source keystonerc_admin


# Positional parameters
tenantname=$1
username=$2
subnetspace=$3
routername=$4
privnetname=$5
privsubnetname=$6


# Create a tenant, user and associate a role/tenant to it.
keystone tenant-create       \
         --name $tenantname
 
keystone user-create         \
         --name $username    \
         --pass fedora

keystone user-role-add       \
         --user $username    \
         --role user         \
         --tenant $tenantname

# Create an RC file for this user and source the credentials
cat >> keystonerc_$username<<EOF
export OS_USERNAME=$username
export OS_TENANT_NAME=$tenantname
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://localhost:5000/v2.0/
export PS1='[\u@\h \W(keystone_$username)]\$ '
EOF


# Source this user credentials
source keystonerc_$username


# Create new private network, subnet for this user tenant
neutron net-create $privnetname

neutron subnet-create $privnetname \
        $subnetspace/24            \
        --name $privsubnetname     \


# Create a router
neutron router-create $routername


# Associate the router to the external network 
# by setting its gateway.
# NOTE: This assumes, the external network name is 'ext'
EXT_NET=$(neutron net-list     \
| grep ext | awk '{print $2;}')

PRIV_NET=$(neutron subnet-list \
| grep $privsubnetname | awk '{print $2;}')

ROUTER_ID=$(neutron router-list \
| grep $routername | awk '{print $2;}')

neutron router-gateway-set  \
        $ROUTER_ID $EXT_NET \

neutron router-interface-add \
        $ROUTER_ID $PRIV_NET \


# Add Neutron security groups for this test tenant
neutron security-group-rule-create   \
        --protocol icmp              \
        --direction ingress          \
        --remote-ip-prefix 0.0.0.0/0 \
        default

neutron security-group-rule-create   \
        --protocol tcp               \
        --port-range-min 22          \
        --port-range-max 22          \
        --direction ingress          \
        --remote-ip-prefix 0.0.0.0/0 \
        default

NOTE: As the shell script is executed in a sub-process (of the parent shell), you won’t notice the sourcing of the newly created user’s keystonerc in your current shell. (You can see it in the stdout of the script in debug mode.)
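To see each command as it runs (including that sourcing), bash’s trace mode helps; for example:

$ bash -x ./create-new-tenant-network.sh \
  demoten1 tuser1 14.0.0.0 trouter1 priv-net1 priv-subnet1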

If it’s helpful for someone, here are my Neutron configurations & iptables rules for a two-node setup with Neutron + OVS + GRE.
