Edward’s Notes

Technical topics and photos.

Vagrant-libvirt on Gentoo Linux

This post covers setting up Vagrant with the libvirt plugin to run hardware-accelerated virtual machines under Gentoo Linux. We will explain how to create machine images for running on Vagrant. I will cover setting up and using ChefDK and test-kitchen in a later post.

Check for KVM support

Before we get started setting up software, we should verify that the hardware and operating system are set up to support KVM.

CPU KVM support

The CPU needs to support hardware-accelerated virtualization. You can check for support by looking in /proc/cpuinfo for the vmx flag (or the svm flag for AMD CPUs).

$ egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm dtherm tpr_shadow vnmi flexpriority

Kernel KVM support

CPU-specific KVM support must be enabled in the kernel. You can check whether your kernel supports KVM by grepping for the CPU-specific KVM options in the kernel's config file. If your kernel is compiled with support for exposing its config in /proc/config.gz you can search there; otherwise, find the kernel version in /proc/version and look for the appropriate kernel configuration file in /boot/config-[KERNEL_VERSION] (you may have to mount /boot first).

$ zegrep '^CONFIG_KVM_(INTEL|AMD)' /proc/config.gz
# CONFIG_KVM_AMD is not set

If your kernel does not support KVM you will need to recompile it with the appropriate CPU-specific KVM option. This can be found in the “Virtualization” menu. If the “Virtualization” menu does not include the “Kernel-based Virtual Machine (KVM)” entry, check that high resolution timers are enabled in “General setup” > “Timers subsystem” > “High Resolution Timer Support”. High resolution timers are often disabled in VirtualBox images.
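For reference, the relevant symbols in a working config look like the following sketch (Intel shown; AMD systems need CONFIG_KVM_AMD instead, and =m module builds also work):

```shell
# Kernel configuration required for KVM on an Intel CPU
CONFIG_HIGH_RES_TIMERS=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=y
```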

Enable Nested KVM support1

In order to run a KVM-accelerated virtual machine inside a KVM-accelerated virtual machine you need to set up support for nested hardware acceleration. If you have compiled KVM into the kernel as shown above, you will need to add kvm-intel.nested=1 (or kvm-amd.nested=1) to the kernel command line to enable KVM nesting. To do this in GRUB2 you would add the parameter to the end of GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub.

# Append parameters to the linux kernel command line for non-recovery entries
GRUB_CMDLINE_LINUX_DEFAULT="... kvm-intel.nested=1"

If KVM is built as a module you need to set the nested option in a modprobe configuration file instead.


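Assuming the standard kvm_intel / kvm_amd module names, a minimal modprobe configuration would look like this (the file name is arbitrary):

```shell
# /etc/modprobe.d/kvm-nested.conf
# Enable nested virtualization for the Intel KVM module
options kvm_intel nested=1
# For AMD CPUs use instead:
# options kvm_amd nested=1
```

Reloading the module (or rebooting) applies the option; the GRUB steps below are only needed when the parameter is passed on the kernel command line.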
You will then need to update the grub configuration.

$ mount /boot
$ grub2-mkconfig -o /boot/grub/grub.cfg

and reboot the system for the setting to come into effect.
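After rebooting, you can confirm nesting is active by reading the module parameter back from sysfs (shown for Intel; substitute kvm_amd on AMD systems):

```shell
$ cat /sys/module/kvm_intel/parameters/nested
```

This prints Y (or 1, depending on kernel version) when nesting is enabled.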

Libvirt setup

Add the architectures you wish to support to your /etc/portage/make.conf file. The configuration below will only enable x86_64-based virtual machines to run.

$ echo 'QEMU_SOFTMMU_TARGETS="x86_64"' >> /etc/portage/make.conf
$ echo 'QEMU_USER_TARGETS="x86_64"' >> /etc/portage/make.conf

Add the following USE flags and emerge app-emulation/qemu.

$ echo "app-emulation/qemu numa python usb spice" \
  > /etc/portage/package.use/app-emulation-qemu
$ emerge app-emulation/qemu

Add the following USE flags for app-emulation/virt-manager and its dependencies, then emerge it as follows.

$ pushd /etc/portage/package.use/
$ echo "net-dns/dnsmasq script" > net-dns-dnsmasq
$ echo "dev-libs/libxml2 python" > dev-libs-libxml2
$ echo "net-misc/spice-gtk python usbredir -gstreamer" > net-misc-spice-gtk
$ echo "net-libs/gtk-vnc python" > net-libs-gtk-vnc
$ echo "app-emulation/libvirt-glib python" > app-emulation-libvirt-glib
$ echo "app-emulation/libvirt firewalld numa virt-network" > app-emulation-libvirt
$ popd
$ emerge app-emulation/virt-manager

We now need to give your regular user permission to connect to libvirt. We will use polkit to grant non-root users access to libvirt. To do this we create a libvirt group and add your user to it as follows.

$ groupadd libvirt
$ gpasswd -a yourlogin libvirt

Next we create a policy file to give the libvirt group permissions to manage libvirt.

/etc/polkit-1/rules.d/10-virt.rules
/* -*- mode: js; js-indent-level: 4; indent-tabs-mode: nil -*- */

// allow access to libvirt for users in libvirt group

polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage"
            && subject.local
            && subject.active
            && subject.isInGroup("libvirt")) {
        return polkit.Result.YES;
    }
});

You may need to reboot at this time for the policy file to come into effect. You will receive an error like the following if your user can’t be authenticated2.

Unable to connect to libvirt.

authentication failed: polkit: polkit\56retains_authorization_after_challenge=1
Authorization requires authentication but no agent is available.

Libvirt URI is: qemu:///system

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/connection.py", line 1020, in _open_thread
  File "/usr/share/virt-manager/virtinst/connection.py", line 158, in open
  File "/usr/lib/python2.7/site-packages/libvirt.py", line 105, in openAuth
    if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: authentication failed: polkit: polkit\56retains_authorization_after_challenge=1
Authorization requires authentication but no agent is available.

Finally we will start and enable the libvirt services.

$ /etc/init.d/libvirtd start
$ /etc/init.d/virtlockd start
$ rc-update add libvirtd default
$ rc-update add virtlockd default
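With the daemons running you can check that your regular user can reach the system libvirt instance; on a fresh install the list will simply be empty:

```shell
$ virsh -c qemu:///system list --all
```

If this prints an empty table rather than an authentication error, the polkit rule is working.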

Setup Vagrant

Vagrant is used to provision virtual machine instances. It comes with a VirtualBox provider, but we will be configuring the libvirt provider to give us access to KVM.

If you haven’t already, you will need to install layman3. Then add my chef overlay4 by copying the overlay description to /etc/layman/overlays/, sync layman, and add the emiddleton-chef-overlay repository.

$ curl https://raw.githubusercontent.com/emiddleton/chef-overlay/master/overlay.xml \
  > /etc/layman/overlays/emiddleton-chef.xml
$ layman -L
$ layman -a emiddleton-chef-overlay

Unmask and emerge vagrant-bin.

$ echo "=app-emulation/vagrant-bin-1.6.5" \
  > /etc/portage/package.keywords/app-emulation-vagrant-bin-1.6.5
$ emerge -uav vagrant-bin

In your regular user account, install the vagrant-libvirt5 plugin, which gives Vagrant access to KVM-based virtual servers through the libvirt API.

$ vagrant plugin install vagrant-libvirt

Finally we will install arp-sk, which is used by fog6 to find out which IP address a virtual server is listening on.

$ emerge net-analyzer/arp-sk

We then configure fog to use libvirt as its default provider by adding the following to the ~/.fog file in your regular user’s account.

:default:
  :libvirt_uri: "qemu:///system"
  :libvirt_ip_command: "/sbin/arp -an |grep -i $mac|cut -d '(' -f 2 | cut -d ')' -f 1"
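The ip_command is an ordinary shell pipeline, so you can sanity-check it against a sample arp line (the MAC and IP below are made-up values):

```shell
# Hypothetical guest MAC address and a matching line of `arp -an` output
mac="52:54:00:12:34:56"
arp_line="? (192.168.121.10) at 52:54:00:12:34:56 [ether] on virbr0"
# Apply the same cut pipeline fog uses to extract the IP between parentheses
echo "$arp_line" | grep -i "$mac" | cut -d '(' -f 2 | cut -d ')' -f 1
```

This prints 192.168.121.10, the address between the parentheses.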

Setup Veewee

The final step is to create machine images to run on vagrant-libvirt. We will be using veewee for this. Veewee is written in Ruby and can be used to create VirtualBox and libvirt vagrant boxes for a wide range of operating systems7.

To install veewee you will need to unmask its dependencies and install it as follows.

$ echo "dev-ruby/fog libvirt" > /etc/portage/package.use/dev-ruby-fog
$ pushd /etc/portage/package.keywords/
$ echo "=virtual/rubygems-7" > virtual-rubygems-7
$ echo "=dev-lang/ruby-2.1.5" > dev-lang-ruby-2.1.5
$ echo "=dev-ruby/i18n-0.6.11" > dev-ruby-i18n-0.6.11
$ echo "=dev-ruby/json-1.8.1" > dev-ruby-json-1.8.1
$ echo "=dev-ruby/rake-10.4.2" > dev-ruby-rake-10.4.2
$ echo "=dev-ruby/thor-0.19.1" > dev-ruby-thor-0.19.1
$ echo "=dev-ruby/rdoc-4.1.2" > dev-ruby-rdoc-4.1.2
$ echo "=dev-ruby/net-ssh-2.9.0" > dev-ruby-net-ssh-2.9.0
$ echo "=dev-ruby/rake-compiler-0.9.3" > dev-ruby-rake-compiler-0.9.3
$ echo "=virtual/ruby-ssl-4" > virtual-ruby-ssl-4
$ echo "=dev-ruby/ffi-1.9.6" > dev-ruby-ffi-1.9.6
$ echo "=dev-ruby/ansi-1.4.3" > dev-ruby-ansi-1.4.3
$ echo "=dev-ruby/childprocess-0.5.5" > dev-ruby-childprocess-0.5.5
$ echo "=dev-ruby/net-scp-1.2.1" > dev-ruby-net-scp-1.2.1
$ echo "=dev-ruby/rubygems-2.4.4" > dev-ruby-rubygems-2.4.4
$ echo "=dev-ruby/builder-3.2.2" > dev-ruby-builder-3.2.2
$ echo "=dev-ruby/racc-1.4.12" > dev-ruby-racc-1.4.12
$ echo "=dev-ruby/mime-types-2.4.3" > dev-ruby-mime-types-2.4.3
$ echo "=dev-ruby/hoe-3.13.0-r1" > dev-ruby-hoe-3.13.0-r1
$ echo "=dev-ruby/nokogiri-1.6.5-r1" > dev-ruby-nokogiri-1.6.5-r1
$ echo "=dev-ruby/multi_json-1.10.1" > dev-ruby-multi_json-1.10.1
$ echo "=dev-ruby/rexical-1.0.5-r3" > dev-ruby-rexical-1.0.5-r3
$ echo "=dev-ruby/trollop-2.0" > dev-ruby-trollop-2.0
$ echo "=dev-ruby/posix-spawn-0.3.9" > dev-ruby-posix-spawn-0.3.9
$ echo "=app-emulation/veewee-" > app-emulation-veewee-
$ emerge app-emulation/veewee

Veewee requires a default image pool8 to be defined before it can create libvirt images. To do this we create a directory to store the images,

$ mkdir -p /var/lib/libvirt/images

then create a pool definition file,

/tmp/default-pool.xml
<pool type="dir">
    <name>default</name>
    <target>
        <path>/var/lib/libvirt/images</path>
    </target>
</pool>

and load it with the libvirt shell.

$ virsh pool-create /tmp/default-pool.xml
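You can confirm the pool is defined and active with:

```shell
$ virsh pool-list --all
```

The default pool should be listed as active.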

Create a vagrant-libvirt box image

Now that we have veewee installed and configured it’s time to create some libvirt vagrant boxes. To create vagrant boxes we start by defining the image template. (NOTE: we are using a patched version of veewee that accepts GitHub https URLs)

$ veewee kvm define centos-64 \

This will create a directory ./definitions/centos-64 containing files used to configure the image. You can now build the box with the build command. (NOTE: this can take a while depending on your system and network)

$ veewee kvm build centos-64

When the build is completed we can export the kvm image as vagrant box.

$ veewee kvm export centos-64

This will result in the creation of a file centos-64.box that can be used by vagrant-libvirt.

Running Vagrant

Finally let’s test the new box in Vagrant. To do this we start by creating a Vagrantfile in the current working directory.

Vagrant.configure("2") do |c|
  c.vm.box = "opscode-centos-64"
  c.vm.box_url = "file:///full/path/to/vagrant/box/centos-64.box"
  c.vm.hostname = "testbox.vagrantup.com"
  c.vm.synced_folder ".", "/vagrant", disabled: true
  c.vm.provider :libvirt do |p|
    # provider-specific options (e.g. memory, cpus) can be set on p here
  end
end

You will need to set the c.vm.box_url to point to the full path of your newly created vagrant box. To start the instance run

$ vagrant up

You can connect to the newly created instance with

$ vagrant ssh

When you have finished playing around with the vagrant instance you can stop it with

$ vagrant halt

and remove the instance with

$ vagrant destroy