Yes, I really could not think of a better title for this. 🙂

Recently I added a new drive, connected by USB, to a test server. It was for sharing with a virtual machine running on that server.

Chain of oops

When I connected the drive to the host, I decided to create a volume group and logical volume on it. This would have been fine, were it not that I then attempted to mount this logical volume in the host, rather than the guest.  The problem, as I later discovered, was that I’d created a volume group in the host with the same name as a volume group in the guest.  Again, this would have been fine on its own, but the chain of errors was complete when I made my next moves:

  • Shared the physical disk with the virtual machine
  • Activated the LVM volume group on the physical disk inside the virtual machine’s operating system (Debian)

To its credit, LVM seemed to handle this quite well, and (if memory serves) merged my new logical volumes into one great volume group.
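To make the chain of oops concrete, the sequence looked roughly like this – the device, VG and LV names below are illustrative, not the real ones:

On the host:

# pvcreate /dev/sdc1
# vgcreate vgdata /dev/sdc1
# lvcreate -n lvdata -L 100G vgdata
# mkfs.ext4 /dev/vgdata/lvdata
# mount /dev/vgdata/lvdata /mnt        (mistake number one: mounting it in the host)

Then, after sharing the physical disk with the guest, inside the guest:

# vgchange -ay vgdata                  (mistake number two: the guest already had a VG called vgdata)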

Identity crisis

Switching between different “hosts” (physical host and virtual guest) to edit logical volumes is not very clever, though.  The reason is that LVM, running within whichever operating system has the drives available, will update the metadata on every drive that is a physical volume assigned to a volume group.

Let me put this another way: if you have /dev/sda1, /dev/sdb1 and /dev/sdc1 all as PVs (physical volumes) that belong to a VG (volume group), then making a change to the volume group – e.g. something simple like vgrename – will affect all three physical volumes’ metadata.
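For example (again with made-up names), a rename performed in the host touches every member PV, which pvs will confirm with output roughly like this:

# vgrename vgdata vgdata_new
# pvs -o pv_name,vg_name
  PV         VG
  /dev/sda1  vgdata_new
  /dev/sdb1  vgdata_new
  /dev/sdc1  vgdata_new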

Now, once you have finished doing this in the physical host and start playing around with the VG in your guest’s operating system, things aren’t going to look quite right.  If you exported /dev/sdc1 from the host to the guest, the guest might:

  • recognise the device at another location, such as /dev/sdb1
  • look at it with LVM and wonder where the other two PVs (/dev/sda1 and /dev/sdb1, as the host knew them) are…
  • let you edit this physical volume’s metadata (using LVM tools) in such a way that you cause further problems for yourself back in the physical host, if you planned to share it back there again.
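Before changing anything in the guest, it’s worth seeing how LVM views the disk from in there, for example:

# pvs
# vgs
# pvdisplay /dev/sdb1

These will show which PVs and VGs the guest can actually see, and LVM will complain about any PVs the volume group expects but cannot find.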

The golden rules for an easy life, so far, are:

  • Don’t share volume groups and logical volumes between hypervisors and guests.
  • You can share a physical disk to a guest, but if you intend to use a logical volume on it within the guest, create it within the guest.  And use it only within the guest.
  • If you must manage and share logical volumes from the host to the guest, use NFS.
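If you do go the NFS route, a minimal sketch looks like this – every name, path and address below is an assumption, so adjust for your own setup (the 192.168.122.x addresses are just the libvirt default network):

On the host:

# apt-get install nfs-kernel-server
# mount /dev/vgdata/lvdata /srv/export
# echo '/srv/export 192.168.122.0/24(rw,sync,no_subtree_check)' >> /etc/exports
# exportfs -ra

In the guest:

# mount -t nfs 192.168.122.1:/srv/export /mnt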

If you have already created a nightmare of “unknown” disks and missing PVs…

It’s not too difficult to remedy a tricky situation with messed up volume groups.  If you are absolutely, positively, 100% certain that you no longer require the missing physical volumes in your VG, then there are actually only three (or four) commands you need:

# lvm vgreduce --removemissing -f <volume group name>

This attempts to remove the missing PVs from your VG and writes an up-to-date config file in /etc/lvm/backup/.  If you inspect that file you’ll see a physical_volumes { } stanza which encompasses the physical volumes now remaining in your VG.
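For illustration, that stanza looks something like this – the ID and device name here are made up, and I’ve trimmed the size/extent attributes:

physical_volumes {

    pv0 {
        id = "xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx"
        device = "/dev/sdb1"    # Hint only

        status = ["ALLOCATABLE"]
        # dev_size, pe_start, pe_count etc. follow here
    }
}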

However, pay close attention to the status attribute of each PV.  You may find that a remaining entry still has:

status = ["MISSING"]

If you then attempt a

# vgchange -ay <volume group name>

you may find that the VG will not become active and that this file again contains several entries related to the missing PVs you thought you’d just removed.  The reason is that the remaining PV(s) carry old metadata which LVM didn’t update when you ran that vgreduce earlier.  Fear not.

Issue:

# lvm vgextend --restoremissing <volume group name> /path/to/PV

e.g. /path/to/PV might be /dev/sdb1 – you’ll need to know this, no guesses 🙂
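If you’re not sure which device that is, LVM will happily tell you which block devices it recognises as PVs and which VG they belong to:

# lvm pvscan
# pvs -o pv_name,pv_uuid,vg_name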

After this, inspect the backup config file that you looked at previously.  The status of the restored PV should have changed from “missing” to “allocatable”.  However, the entries for the genuinely missing PVs will still be there, so let’s get rid of them:

# lvm vgreduce --removemissing -f <volume group name>

Now take one more look at that config file.  It should just contain the PVs that are present and assigned to your VG.

Try activating the VG:

# vgchange -ay <volume group name>

1 logical volume(s) in volume group "<volume group name>" now active

If you’ve got this far, you’re basically on the home straight.  Simply mount the LV on the file system and away you go!
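In other words, something along these lines (the mount point is whatever suits you):

# mkdir -p /mnt/<mount point>
# mount /dev/<volume group name>/<logical volume name> /mnt/<mount point>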

I recently upgraded my laptop hard drive and decided to move all the virtual disk files of my virtual machines to my home directory.

However, when trying to run the VM, an error notification appeared:

Error starting domain: internal error process exited while connecting to monitor: Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
kvm: -drive file=/home/sd/libvirt/images/WinXPsp3IE8-d3.qcow2,if=none,id=drive-ide0-0-0,format=raw,cache=writeback: could not open disk image /home/sd/libvirt/images/WinXPsp3IE8-d3.qcow2: Permission denied

The Details section of that dialog showed me where the error was occurring:

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 66, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1114, in startup
    self._backend.create()
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 620, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error process exited while connecting to monitor: Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
kvm: -drive file=/home/sd/libvirt/images/WinXPsp3IE8-d3.qcow2,if=none,id=drive-ide0-0-0,format=raw,cache=writeback: could not open disk image /home/sd/libvirt/images/WinXPsp3IE8-d3.qcow2: Permission denied

… or, at least, that’s what I hoped.  Except it didn’t.

For a long time, I played around with permissions on the virtual disk image itself, on the directory containing it, and on every directory above that, all the way up to ~.  None of it helped.

Then I stumbled upon this libvirt bug report.  Comment #6 by Cole Robinson was what I needed:

“What virt-manager typically offers to do is use ACLs to allow the ‘qemu’ user search permissions on your home dir, which is all it should need and is fairly safe and restrictive.”

In order to check and set this, you’ll need the file access control list (ACL) utilities – getfacl and setfacl:

# cd /home

My home is “sd”

# getfacl sd

# file: sd
# owner: sd
# group: sd
user::rwx
user:root:--x
user:www-data:r-x
group::r-x
group:www-data:r-x
mask::r-x
other::---

The reason I have www-data with read and execute permissions is that I do web development and testing, and I also keep all my web-dev files in ~ too.  This just makes my system more “portable”, safer to upgrade and/or easier to migrate to a different Linux.

To set the required permission for libvirt / qemu, you just issue this one-liner:

# setfacl -m u:libvirt-qemu:r-x sd

… substituting your own home directory name for sd.

setfacl (set file access control) takes three main arguments:

  • the action – in this case, -m means “modify” the ACL;
  • the data to apply, colon-separated: here we specify that it’s a user (u), that the user is libvirt-qemu, and that the permissions we want to allow are read and execute (r-x);
  • finally, the file or folder whose ACL should be modified – in this case, my home (sd).
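To double-check that the entry took effect, run getfacl again – you should now see a line like this for the new user (output trimmed):

# getfacl sd | grep libvirt-qemu
user:libvirt-qemu:r-x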

After this, my virtual machine runs up perfectly.

This is relevant for Crunchbang and other Debian-related distros.  For Fedora/CentOS, I believe the user should be qemu.
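So on Fedora/CentOS the equivalent would presumably be (again, substituting your own home directory):

# setfacl -m u:qemu:r-x /home/<your home>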

The problem: you cannot boot a paravirtualised virtual machine from a CD-ROM in order to install it.  You may also be on a wireless link set up by NetworkManager, where wlan0 isn’t a bridged interface.

Here’s the solution:

    1. Download the ISO of your favourite distro and burn it to DVD, then mount it on your machine (this will probably happen just by inserting the disc into your drive).  If a window opens in your desktop, highlight the path in the address bar and copy it to the system clipboard (CTRL-C).
    2. Install Apache and start the apache/httpd service
    3. In /var/www/html (/var/www on Debian, I believe) simply create a symbolic link to the directory where the DVD is mounted.  In this example, I am using CentOS:
       #  ln -s /media/CentOS_5.4_Final/ centos
    4. Now create the virtual machine: start up virt-manager, ensure that it can connect to Dom0, and select New…
    5. In the Installation Source section of the virtual machine creation dialog,  specify the following parameter: Installation media URL: http://localhost/centos (the path to the installer repository)
    6. In the “type of network” selection, select Virtual Interface.
    7. Click through the rest of the set-up – but BEFORE YOU COMPLETE IT, GET READY TO PAUSE THE VM. The virtual machine will start up automatically when you finish the set-up steps.
    8. As soon as you start the VM, the initial bootstrapping files should load and the distribution’s kernel should start up.   Only when the console window opens should you pause it!
    9. If you are using CentOS, you now need to modify the configuration file that’s been created, following these steps:
      1. Download the Xen kernel and initial ramdisk from here: http://mirror.centos.org/centos/5/os/x86_64/images/xen/ (change the path if you’re using an i386 host)
      2. Save them somewhere sensible: I made /var/lib/xen/boot and put them in there.
      3. Un-pause and shut down the virtual machine.
      4. Modify the config file to include the paths to the Xen-aware kernel and initrd (put these entries at the top, adjusting for your paths as necessary – see the consolidated sketch after this list):
        kernel = "/var/lib/xen/boot/vmlinuz"
        ramdisk = "/var/lib/xen/boot/initrd.img"
      5. IMPORTANT – also comment out the line for pygrub, so: #bootloader = "/usr/bin/pygrub"
      6. Save the config and run the virtual machine. Nearly there!  Now open up the console to the virtual machine…
    10. If you are prompted for a network address or DHCP, try DHCP.
    11. If you are prompted for an installation path, stick to HTTP. In a network interface dialog that may appear, choose a manual address that doesn’t conflict with other hosts on your real network (but make sure it’s valid for your network!).
    12. Because the VM now has a virtual network interface, http://localhost/centos is a meaningless path.  If the installer identifies this and prompts for an alternative path to the stage2.img file [true in CentOS, at least], then do the following on your host (real) machine:
      # ifconfig wlan0
      (substitute eth0 for wlan0 if you’re using a wired Ethernet connection)
    13. Paste/type the IP address from the output of ifconfig into the path dialog of the halted installer, but keep the /centos/ directory.
    14. The installer should then run through the rest of the motions and voilà – a paravirtualised virtual machine installed from local CD/DVD-ROM.
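Pulling steps 9.1–9.5 together, the downloads and the top of the config file end up looking roughly like this.  This is only a sketch: it assumes the files on the mirror are still called vmlinuz and initrd.img, and the rest of the config file is whatever virt-manager generated for you.

# mkdir -p /var/lib/xen/boot
# cd /var/lib/xen/boot
# wget http://mirror.centos.org/centos/5/os/x86_64/images/xen/vmlinuz
# wget http://mirror.centos.org/centos/5/os/x86_64/images/xen/initrd.img

Then, at the top of the domain’s config file:

kernel = "/var/lib/xen/boot/vmlinuz"
ramdisk = "/var/lib/xen/boot/initrd.img"
#bootloader = "/usr/bin/pygrub"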

When the installer has finished running, uncomment the pygrub line in the config file.

If you spot any errors with this process, please let me know so I can correct the procedure.

Happy Christmas!   *<(:-##