Yes, I really could not think of a better title for this. 🙂
Recently I added a new drive, connected by USB, to a test server. It was for sharing with a virtual machine running on that server.
Chain of oops
When I connected the drive to the host, I decided to create a volume group and logical volume on it. This would have been fine, were it not that I then attempted to mount this logical volume in the host rather than the guest. The problem, as I later discovered, was that I’d created a volume group in the host with the same name as a volume group in the guest. Again, this would have been fine on its own, but the chain of errors was complete when I made the following moves (a rough sketch of the commands follows the list):
- Shared the physical disk with the virtual machine
- Activated the LVM volume group on the physical disk inside the virtual machine’s operating system (Debian)
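For illustration, the sequence went roughly like this. The device path, the filesystem type and the names vgdata/lvdata are assumptions for the sake of the example, not the exact ones I used – and note that vgdata here happens to match the name of a VG that already existed inside the guest, which was the mistake. On the host:

# pvcreate /dev/sdc1
# vgcreate vgdata /dev/sdc1
# lvcreate -L 100G -n lvdata vgdata
# mkfs.ext4 /dev/vgdata/lvdata
# mount /dev/vgdata/lvdata /mnt

…and then, with the raw disk also passed through to the VM, inside the guest:

# vgchange -ay vgdata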
To its credit, LVM seemed to handle this quite well, and (if memory serves) merged my new logical volumes into one great volume group.
Identity crisis
Switching between different “hosts” (physical host and virtual guest) to edit logical volumes is not very clever, though. The reason is that LVM, working with whatever drives the operating system presents to it, will update the on-disk metadata of every drive that is a physical volume assigned to a volume group.
Let me put this another way: if you have /dev/sda1, /dev/sdb1 and /dev/sdc1 all as PVs (physical volumes) that belong to a VG (volume group), then making a change to the volume group – e.g. something simple like vgrename – will affect those three physical volumes’ metadata.
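As a quick illustration (the VG names here are hypothetical), you can watch this happen with the standard reporting tools:

# pvs -o pv_name,vg_name
# vgrename vgdata vgdata_renamed
# pvs -o pv_name,vg_name

After the rename, every PV that belonged to vgdata reports vgdata_renamed, because each one carries its own copy of the VG metadata.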
Now, once you have finished doing this in the physical host and start playing around with the VG in your guest’s operating system, things aren’t going to look quite right. If you exported /dev/sdc1 from the host to the guest, the guest might (a way to check is sketched after this list):
- recognise the device at another location, such as /dev/sdb1
- look at it with lvm and wonder where /dev/sda1 and /dev/sdc1 are…
- let you edit this physical volume’s metadata (using LVM tools) in such a way that you cause further problems for yourself back in the physical host, if you planned to share it back there again.
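If you want to see how confused the guest actually is, the reporting commands are harmless and read-only (the names shown will be whatever your guest happens to use):

# pvs
# vgs
# lvs

LVM will typically warn here about devices it expected to find but cannot – which is exactly the “missing PV” situation the rest of this post deals with.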
The golden rules for an easy life, so far, are:
- Don’t share volume groups and logical volumes between hypervisors and guests.
- You can share a physical disk to a guest, but if you intend to use a logical volume on it within the guest, create it within the guest. And use it only within the guest.
- If you must manage and share logical volumes from the host to the guest, use NFS.
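If you go the NFS route, a minimal sketch looks something like this; the export path, network range and host address are assumptions (192.168.122.x is the default libvirt network), and it presumes an NFS server such as nfs-kernel-server is installed and running on the host. On the host:

# mount /dev/vgdata/lvdata /srv/share
# echo '/srv/share 192.168.122.0/24(rw,sync,no_subtree_check)' >> /etc/exports
# exportfs -ra

In the guest:

# mount -t nfs 192.168.122.1:/srv/share /mnt

This way the host remains the only system that ever touches the LVM metadata.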
If you have already created a nightmare of “unknown” disks and missing PVs…
It’s not too difficult to remedy a tricky situation with messed up volume groups. If you are absolutely, positively, 100% certain that you no longer require the missing physical volumes in your VG, then there are actually only three (or four) commands you need:
# lvm vgreduce --removemissing -f <volume group name>
This attempts to remove the missing PVs from your VG and writes an up-to-date metadata backup to /etc/lvm/backup/<volume group name>. If you inspect that file you’ll see a physical_volumes { } stanza which encompasses the physical volumes now remaining in your VG.
However, pay close attention to the status attribute of each PV. You may find that a remaining entry still has:
status = ["MISSING"]
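For reference, the relevant part of that backup file looks roughly like this (the key name, device path and UUID below are placeholders):

physical_volumes {

	pv0 {
		id = "<PV UUID>"
		device = "/dev/sdb1"
		status = ["MISSING"]
	}
}

A healthy PV would show “ALLOCATABLE” rather than “MISSING” in its status.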
If you then attempt a
# vgchange -ay <volume group name>
you may find that the VG will not become active and that this file again contains several entries relating to the missing PVs you thought you’d just removed. The reason is that the remaining PV(s) carry old metadata which wasn’t updated by LVM when you ran that lvm vgreduce earlier. Fear not.
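Before going further, it can be reassuring to see what LVM currently believes; these reporting commands make no changes:

# pvs -o pv_name,vg_name,pv_uuid
# vgs -o vg_name,pv_count,lv_count

If the PV count the VG reports is higher than the number of PVs you can actually see, that is the stale metadata at work.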
Issue:
# lvm vgextend --restoremissing <volume group name> /path/to/PV
e.g. /path/to/PV might be /dev/sdb1 – you’ll need to know this, no guesses 🙂
After this, inspect the backup config file that you looked at previously. The status should have changed from “missing” to “allocatable”. However, the missing PVs will still be present, so let’s get rid of them:
# lvm vgreduce --removemissing -f <volume group name>
Now take one more look at that config file. It should just contain the PVs that are present and assigned to your VG.
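A quick way to eyeball this, assuming your VG is called vgdata, is:

# grep -E 'device|status' /etc/lvm/backup/vgdata

(The VG itself also has a status line, so expect one match beyond the per-PV entries.)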
Try activating the VG:
# vgchange -ay <volume group name>
1 logical volume(s) in volume group "<volume group name>" now active
If you’ve got this far, you’re basically on the home straight. Simply mount the LV on the file system and away you go!
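For completeness, the final steps might look something like this (the mount point is an assumption, and the VG/LV names are whatever yours are called):

# mkdir -p /mnt/restored
# mount /dev/<volume group name>/<logical volume name> /mnt/restored

Add a matching entry to /etc/fstab if you want it mounted automatically at boot.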